Importing libraries¶

In [1]:
import numpy as np
import matplotlib.pyplot as plt
import random
import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error
from sklearn.metrics import mean_absolute_error
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import Normalizer
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import SGD, Adam
from keras.layers import Flatten,Dense,BatchNormalization,Activation,Dropout
from keras.callbacks import ReduceLROnPlateau
from sklearn.metrics import classification_report, confusion_matrix, ConfusionMatrixDisplay

Questão 1¶

Utilize redes neurais perceptron de múltiplas camadas para aproximar as funções abaixo. Apresente um gráfico com a curva da função analítica e a curva da função aproximada pela rede neural. Apresente também a curva do erro médio de treinamento com relação ao número de épocas e a curva do erro médio com o conjunto de validação. Procure definir para cada função a arquitetura da rede neural perceptron, isto é, o número de entradas, o número de neurônios em cada camada e o número de neurônios camada de saída.

Observações: Como se trata de um problema de aproximação de funções, considere a ca- mada de saída do tipo linear puro. Isto é, φ(v)=v, onde v é o potencial de ativação.

a)¶

$f(x_1,x_2) = (1 -x_1)^2 + 100 (x_2 - (x_1)^2)^2$ com $-10 ≤ x_1 ≤ 10, -10 ≤ x_2 ≤10$

In [ ]:
# function's definition
def f(x1,x2):
  return (1 - x1)**2 + 100*(x2 - (x1)**2)**2
In [ ]:
# generating points
x1, x2 = np.meshgrid(np.linspace(-10, 10, 100), np.linspace(-10, 10, 100))
y = f(x1,x2)
In [ ]:
fig, ax = plt.subplots(figsize=(10, 7), subplot_kw=dict(projection='3d'))

ax.plot_surface(x1, x2, y)

ax.set(
    xlabel='$x_1$',
    ylabel='$x_2$',
    zlabel='$f(x_1, x_2)$'
)

plt.tight_layout()
plt.show()
In [ ]:
# split training and test
x_train, x_test, y_train, y_test = train_test_split(
    np.vstack([x1.flatten(), x2.flatten()]).T, 
    y.flatten(), 
    test_size=0.2, 
    random_state=42
)
In [ ]:
fig, ax = plt.subplots(figsize=(10, 7), subplot_kw=dict(projection='3d'))

ax.plot_wireframe(x1, x2, y, linewidths=0.5, color='lightgrey')
ax.scatter(x_train[:,0], x_train[:,1], y_train, s=1, color='darkorange', label='Training data')
ax.scatter(x_test[:,0], x_test[:,1], y_test, s=5, color='darkgreen', label='Test data')

ax.set(
    xlabel='$x_1$',
    ylabel='$x_2$',
    zlabel='$f(x_1, x_2)$'
)

plt.legend()
plt.tight_layout()
plt.show()
In [ ]:
# created scaler
scaler = StandardScaler()

y_train = y_train.reshape(-1,1)
y_test = y_test.reshape(-1,1)

# fit normalizer on training dataset
scaler.fit(y_train)
 
# transform training dataset
y_train = scaler.transform(y_train)
 
# transform test dataset
y_test = scaler.transform(y_test)
In [ ]:
# verifying shapes
print("X train shape: ", x_train.shape)
print("Y train shape: ", y_train.shape)
print("X test shape: ", x_test.shape)
print("Y test shape: ", y_test.shape)
X train shape:  (8000, 2)
Y train shape:  (8000, 1)
X test shape:  (2000, 2)
Y test shape:  (2000, 1)
In [ ]:
mlp = Sequential([
    Dense(64, activation='relu', input_shape=(2,)),
    Dense(32, activation='relu'),
    Dense(16, activation='relu'),
    Dense(8, activation='relu'),
    Dense(4, activation='relu'),
    Dense(1, activation='linear')
])

mlp.compile(
    loss='mean_squared_error',
    optimizer='adam'
)

mlp.summary()
Model: "sequential_1"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 dense_6 (Dense)             (None, 64)                192       
                                                                 
 dense_7 (Dense)             (None, 32)                2080      
                                                                 
 dense_8 (Dense)             (None, 16)                528       
                                                                 
 dense_9 (Dense)             (None, 8)                 136       
                                                                 
 dense_10 (Dense)            (None, 4)                 36        
                                                                 
 dense_11 (Dense)            (None, 1)                 5         
                                                                 
=================================================================
Total params: 2,977
Trainable params: 2,977
Non-trainable params: 0
_________________________________________________________________
In [ ]:
history = mlp.fit(
    x_train, y_train,
    batch_size=8,
    epochs=2000,
    validation_split=0.1,
    callbacks=[
        tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=10),
        tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=5, min_lr=0.0001)
    ]
)
Epoch 1/2000
900/900 [==============================] - 2s 1ms/step - loss: 0.2403 - val_loss: 0.0481 - lr: 0.0010
Epoch 2/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.0301 - val_loss: 0.0225 - lr: 0.0010
Epoch 3/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.0200 - val_loss: 0.0144 - lr: 0.0010
Epoch 4/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.0144 - val_loss: 0.0141 - lr: 0.0010
Epoch 5/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.0098 - val_loss: 0.0073 - lr: 0.0010
Epoch 6/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.0094 - val_loss: 0.0056 - lr: 0.0010
Epoch 7/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.0102 - val_loss: 0.0024 - lr: 0.0010
Epoch 8/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.0082 - val_loss: 0.0071 - lr: 0.0010
Epoch 9/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.0055 - val_loss: 0.0031 - lr: 0.0010
Epoch 10/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.0053 - val_loss: 0.0126 - lr: 0.0010
Epoch 11/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.0051 - val_loss: 8.4492e-04 - lr: 0.0010
Epoch 12/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.0045 - val_loss: 0.0091 - lr: 0.0010
Epoch 13/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.0103 - val_loss: 7.3677e-04 - lr: 0.0010
Epoch 14/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.0041 - val_loss: 0.0061 - lr: 0.0010
Epoch 15/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.0034 - val_loss: 0.0015 - lr: 0.0010
Epoch 16/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.0047 - val_loss: 0.0060 - lr: 0.0010
Epoch 17/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.0028 - val_loss: 0.0046 - lr: 0.0010
Epoch 18/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.0045 - val_loss: 0.0064 - lr: 0.0010
Epoch 19/2000
900/900 [==============================] - 1s 1ms/step - loss: 3.8528e-04 - val_loss: 3.2380e-04 - lr: 1.0000e-04
Epoch 20/2000
900/900 [==============================] - 1s 1ms/step - loss: 2.4806e-04 - val_loss: 2.2849e-04 - lr: 1.0000e-04
Epoch 21/2000
900/900 [==============================] - 1s 1ms/step - loss: 2.4695e-04 - val_loss: 2.0553e-04 - lr: 1.0000e-04
Epoch 22/2000
900/900 [==============================] - 1s 1ms/step - loss: 2.3648e-04 - val_loss: 1.9860e-04 - lr: 1.0000e-04
Epoch 23/2000
900/900 [==============================] - 1s 1ms/step - loss: 2.3406e-04 - val_loss: 1.8407e-04 - lr: 1.0000e-04
Epoch 24/2000
900/900 [==============================] - 1s 1ms/step - loss: 2.1895e-04 - val_loss: 2.1349e-04 - lr: 1.0000e-04
Epoch 25/2000
900/900 [==============================] - 1s 1ms/step - loss: 2.5439e-04 - val_loss: 2.7080e-04 - lr: 1.0000e-04
Epoch 26/2000
900/900 [==============================] - 1s 1ms/step - loss: 2.3294e-04 - val_loss: 3.1382e-04 - lr: 1.0000e-04
Epoch 27/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.9288e-04 - val_loss: 2.5049e-04 - lr: 1.0000e-04
Epoch 28/2000
900/900 [==============================] - 2s 2ms/step - loss: 2.0941e-04 - val_loss: 1.4517e-04 - lr: 1.0000e-04
Epoch 29/2000
900/900 [==============================] - 2s 2ms/step - loss: 2.0231e-04 - val_loss: 1.7252e-04 - lr: 1.0000e-04
Epoch 30/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.8211e-04 - val_loss: 1.2884e-04 - lr: 1.0000e-04
Epoch 31/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.8255e-04 - val_loss: 1.1797e-04 - lr: 1.0000e-04
Epoch 32/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.5825e-04 - val_loss: 2.3869e-04 - lr: 1.0000e-04
Epoch 33/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.8320e-04 - val_loss: 1.7549e-04 - lr: 1.0000e-04
Epoch 34/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.5552e-04 - val_loss: 1.2200e-04 - lr: 1.0000e-04
Epoch 35/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.6122e-04 - val_loss: 1.1288e-04 - lr: 1.0000e-04
Epoch 36/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.6115e-04 - val_loss: 1.4661e-04 - lr: 1.0000e-04
Epoch 37/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.7000e-04 - val_loss: 1.8074e-04 - lr: 1.0000e-04
Epoch 38/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.5897e-04 - val_loss: 1.3702e-04 - lr: 1.0000e-04
Epoch 39/2000
900/900 [==============================] - 2s 2ms/step - loss: 1.8850e-04 - val_loss: 9.9578e-05 - lr: 1.0000e-04
Epoch 40/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.8244e-04 - val_loss: 1.0522e-04 - lr: 1.0000e-04
Epoch 41/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.3757e-04 - val_loss: 1.5331e-04 - lr: 1.0000e-04
Epoch 42/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.3224e-04 - val_loss: 2.4568e-04 - lr: 1.0000e-04
Epoch 43/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.6857e-04 - val_loss: 1.4375e-04 - lr: 1.0000e-04
Epoch 44/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.3927e-04 - val_loss: 3.8073e-04 - lr: 1.0000e-04
Epoch 45/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.6187e-04 - val_loss: 4.2486e-04 - lr: 1.0000e-04
Epoch 46/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.4784e-04 - val_loss: 1.7491e-04 - lr: 1.0000e-04
Epoch 47/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.5885e-04 - val_loss: 1.2637e-04 - lr: 1.0000e-04
Epoch 48/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.3876e-04 - val_loss: 9.0454e-05 - lr: 1.0000e-04
Epoch 49/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.7796e-04 - val_loss: 1.3724e-04 - lr: 1.0000e-04
Epoch 50/2000
900/900 [==============================] - 2s 2ms/step - loss: 1.3802e-04 - val_loss: 1.1495e-04 - lr: 1.0000e-04
Epoch 51/2000
900/900 [==============================] - 2s 2ms/step - loss: 1.4784e-04 - val_loss: 1.0661e-04 - lr: 1.0000e-04
Epoch 52/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.2565e-04 - val_loss: 9.0812e-05 - lr: 1.0000e-04
Epoch 53/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.4147e-04 - val_loss: 1.0544e-04 - lr: 1.0000e-04
Epoch 54/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.4119e-04 - val_loss: 5.1870e-04 - lr: 1.0000e-04
Epoch 55/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.4186e-04 - val_loss: 7.9401e-05 - lr: 1.0000e-04
Epoch 56/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.7324e-04 - val_loss: 9.3971e-05 - lr: 1.0000e-04
Epoch 57/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.3775e-04 - val_loss: 1.4554e-04 - lr: 1.0000e-04
Epoch 58/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.2992e-04 - val_loss: 9.5643e-05 - lr: 1.0000e-04
Epoch 59/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.2934e-04 - val_loss: 7.1712e-05 - lr: 1.0000e-04
Epoch 60/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.3317e-04 - val_loss: 1.5182e-04 - lr: 1.0000e-04
Epoch 61/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.2192e-04 - val_loss: 9.4130e-05 - lr: 1.0000e-04
Epoch 62/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.2367e-04 - val_loss: 1.1338e-04 - lr: 1.0000e-04
Epoch 63/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.5802e-04 - val_loss: 1.1068e-04 - lr: 1.0000e-04
Epoch 64/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.1099e-04 - val_loss: 1.8395e-04 - lr: 1.0000e-04
Epoch 65/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.2799e-04 - val_loss: 6.3500e-05 - lr: 1.0000e-04
Epoch 66/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.2981e-04 - val_loss: 1.4402e-04 - lr: 1.0000e-04
Epoch 67/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.2969e-04 - val_loss: 6.7762e-05 - lr: 1.0000e-04
Epoch 68/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.3877e-04 - val_loss: 9.6731e-05 - lr: 1.0000e-04
Epoch 69/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.1277e-04 - val_loss: 1.3771e-04 - lr: 1.0000e-04
Epoch 70/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.1472e-04 - val_loss: 1.2268e-04 - lr: 1.0000e-04
Epoch 71/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.1190e-04 - val_loss: 7.2661e-05 - lr: 1.0000e-04
Epoch 72/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.3607e-04 - val_loss: 1.3418e-04 - lr: 1.0000e-04
Epoch 73/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.1247e-04 - val_loss: 2.3557e-04 - lr: 1.0000e-04
Epoch 74/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.1410e-04 - val_loss: 9.0923e-05 - lr: 1.0000e-04
Epoch 75/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.1888e-04 - val_loss: 2.2146e-04 - lr: 1.0000e-04
In [ ]:
fig, ax = plt.subplots(figsize=(10, 7))

ax.plot(history.history['loss'], label='Training Loss')
ax.plot(history.history['val_loss'], label='Validation Loss')

ax.set(
    title='Processo de otimização das funções de perda',
    ylabel='Loss',
    xlabel='Epoch'
)

plt.legend()
plt.tight_layout()
plt.show()
In [ ]:
predictions = mlp.predict(x_test)
inverse_predictions = scaler.inverse_transform(predictions)
63/63 [==============================] - 0s 996us/step
In [ ]:
fig, ax = plt.subplots(figsize=(15, 9), subplot_kw=dict(projection='3d'))

ax.plot_wireframe(x1, x2, y, linewidths=0.5, color='lightgrey')
ax.scatter(x_test[:,0], x_test[:,1], scaler.inverse_transform(y_test), s=14, color='C0', label='Training data')
ax.scatter(x_test[:,0], x_test[:,1], inverse_predictions, s=15, marker='^', color='C1', label='Test data')

ax.set(
    xlabel='$x_1$',
    ylabel='$x_2$',
    zlabel='$f(x_1, x_2)$'
)

plt.legend()
plt.tight_layout()
plt.show()
In [ ]:
mse = mean_squared_error(scaler.inverse_transform(y_test), inverse_predictions)
rmse = mean_squared_error(scaler.inverse_transform(y_test), inverse_predictions, squared=False)
mae = mean_absolute_error(scaler.inverse_transform(y_test), inverse_predictions)
In [ ]:
print(f"Mean Squared Error: {mse}")
print(f"Root Mean Squared Error: {rmse}")
print(f"Mean Absolute Error: {mae}")
Mean Squared Error: 16880097.554985106
Root Mean Squared Error: 4108.5395890736045
Mean Absolute Error: 2477.902080862183

b)¶

$x_1^2 + x_2^2 + 2x_1x_2cos(\pi x_1x_2) + x_1 + x_2 -1$ com $|x_1| ≤1 , |x_2| ≤ 1$

In [ ]:
# function's definition
def g(x1,x2):
  return x1**2 + x2**2 + 2 * x1 * x2 * np.cos(np.pi * x1 * x2) + x1 + x2 - 1
In [ ]:
# generating points
x1, x2 = np.meshgrid(np.linspace(-1, 1, 100), np.linspace(-1, 1, 100))
y = g(x1,x2)
In [ ]:
# plotting surface
fig, ax = plt.subplots(figsize=(10, 7), subplot_kw=dict(projection='3d'))
ax.plot_surface(x1, x2, y)

ax.set(
    xlabel='$x_1$',
    ylabel='$x_2$',
    zlabel='$f(x_1, x_2)$'
)

plt.tight_layout()
plt.show()
In [ ]:
# split train test
x_train, x_test, y_train, y_test = train_test_split(
    np.vstack([x1.flatten(), x2.flatten()]).T, 
    y.flatten(), 
    test_size=0.2, 
    random_state=505
)
In [ ]:
# surface os points to test and to train
fig, ax = plt.subplots(figsize=(10, 7), subplot_kw=dict(projection='3d'))

ax.plot_wireframe(x1, x2, y, linewidths=0.5, color='lightgrey')
ax.scatter(x_train[:,0], x_train[:,1], y_train, s=1, color='darkorange', label='Training data')
ax.scatter(x_test[:,0], x_test[:,1], y_test, s=5, color='darkgreen', label='Test data')

ax.set(
    xlabel='$x_1$',
    ylabel='$x_2$',
    zlabel='$f(x_1, x_2)$'
)

plt.legend()
plt.tight_layout()
plt.show()
In [ ]:
mlp = Sequential([
    Dense(64, activation='relu', input_shape=(2,)),
    Dense(32, activation='relu'),
    Dense(16, activation='relu'),
    Dense(8, activation='relu'),
    Dense(4, activation='relu'),
    Dense(1, activation='linear')
])

mlp.compile(
    loss='mean_squared_error',
    optimizer='adam'
)

mlp.summary()
Model: "sequential_3"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 dense_18 (Dense)            (None, 64)                192       
                                                                 
 dense_19 (Dense)            (None, 32)                2080      
                                                                 
 dense_20 (Dense)            (None, 16)                528       
                                                                 
 dense_21 (Dense)            (None, 8)                 136       
                                                                 
 dense_22 (Dense)            (None, 4)                 36        
                                                                 
 dense_23 (Dense)            (None, 1)                 5         
                                                                 
=================================================================
Total params: 2,977
Trainable params: 2,977
Non-trainable params: 0
_________________________________________________________________
In [ ]:
# training the model
history = mlp.fit(
    x_train, y_train,
    batch_size=8,
    epochs=2000,
    validation_split=0.1,
    callbacks=[
        tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=10),
        tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=5, min_lr=0.0001)
    ]
)
Epoch 1/2000
900/900 [==============================] - 2s 1ms/step - loss: 0.3755 - val_loss: 0.1831 - lr: 0.0010
Epoch 2/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.1221 - val_loss: 0.0754 - lr: 0.0010
Epoch 3/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.0488 - val_loss: 0.0392 - lr: 0.0010
Epoch 4/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.0238 - val_loss: 0.0227 - lr: 0.0010
Epoch 5/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.0188 - val_loss: 0.0208 - lr: 0.0010
Epoch 6/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.0147 - val_loss: 0.0175 - lr: 0.0010
Epoch 7/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.0147 - val_loss: 0.0202 - lr: 0.0010
Epoch 8/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.0169 - val_loss: 0.0165 - lr: 0.0010
Epoch 9/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.0135 - val_loss: 0.0165 - lr: 0.0010
Epoch 10/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.0148 - val_loss: 0.0140 - lr: 0.0010
Epoch 11/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.0100 - val_loss: 0.0106 - lr: 0.0010
Epoch 12/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.0096 - val_loss: 0.0085 - lr: 0.0010
Epoch 13/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.0058 - val_loss: 0.0064 - lr: 0.0010
Epoch 14/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.0059 - val_loss: 0.0047 - lr: 0.0010
Epoch 15/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.0049 - val_loss: 0.0043 - lr: 0.0010
Epoch 16/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.0033 - val_loss: 0.0029 - lr: 0.0010
Epoch 17/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.0027 - val_loss: 0.0028 - lr: 0.0010
Epoch 18/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.0024 - val_loss: 0.0037 - lr: 0.0010
Epoch 19/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.0030 - val_loss: 0.0022 - lr: 0.0010
Epoch 20/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.0035 - val_loss: 0.0016 - lr: 0.0010
Epoch 21/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.0013 - val_loss: 0.0039 - lr: 0.0010
Epoch 22/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.0017 - val_loss: 0.0012 - lr: 0.0010
Epoch 23/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.0017 - val_loss: 0.0011 - lr: 0.0010
Epoch 24/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.0017 - val_loss: 0.0026 - lr: 0.0010
Epoch 25/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.0010 - val_loss: 0.0046 - lr: 0.0010
Epoch 26/2000
900/900 [==============================] - 2s 2ms/step - loss: 0.0016 - val_loss: 8.2744e-04 - lr: 0.0010
Epoch 27/2000
900/900 [==============================] - 2s 2ms/step - loss: 0.0014 - val_loss: 0.0011 - lr: 0.0010
Epoch 28/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.0018 - val_loss: 7.3059e-04 - lr: 0.0010
Epoch 29/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.0013 - val_loss: 0.0021 - lr: 0.0010
Epoch 30/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.0013 - val_loss: 5.2872e-04 - lr: 0.0010
Epoch 31/2000
900/900 [==============================] - 1s 1ms/step - loss: 9.0995e-04 - val_loss: 7.3022e-04 - lr: 0.0010
Epoch 32/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.0019 - val_loss: 0.0014 - lr: 0.0010
Epoch 33/2000
900/900 [==============================] - 1s 1ms/step - loss: 6.4458e-04 - val_loss: 9.2240e-04 - lr: 0.0010
Epoch 34/2000
900/900 [==============================] - 1s 1ms/step - loss: 0.0019 - val_loss: 7.6538e-04 - lr: 0.0010
Epoch 35/2000
900/900 [==============================] - 1s 1ms/step - loss: 9.7375e-04 - val_loss: 5.7007e-04 - lr: 0.0010
Epoch 36/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.5156e-04 - val_loss: 3.2450e-04 - lr: 1.0000e-04
Epoch 37/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.4211e-04 - val_loss: 3.0920e-04 - lr: 1.0000e-04
Epoch 38/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.3987e-04 - val_loss: 3.2047e-04 - lr: 1.0000e-04
Epoch 39/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.4100e-04 - val_loss: 2.8895e-04 - lr: 1.0000e-04
Epoch 40/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.3653e-04 - val_loss: 3.0373e-04 - lr: 1.0000e-04
Epoch 41/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.3535e-04 - val_loss: 2.6592e-04 - lr: 1.0000e-04
Epoch 42/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.3654e-04 - val_loss: 2.6799e-04 - lr: 1.0000e-04
Epoch 43/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.3816e-04 - val_loss: 2.5570e-04 - lr: 1.0000e-04
Epoch 44/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.2641e-04 - val_loss: 2.9339e-04 - lr: 1.0000e-04
Epoch 45/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.2121e-04 - val_loss: 3.2909e-04 - lr: 1.0000e-04
Epoch 46/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.2684e-04 - val_loss: 2.3325e-04 - lr: 1.0000e-04
Epoch 47/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.1775e-04 - val_loss: 2.6653e-04 - lr: 1.0000e-04
Epoch 48/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.1174e-04 - val_loss: 2.3168e-04 - lr: 1.0000e-04
Epoch 49/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.1471e-04 - val_loss: 2.5295e-04 - lr: 1.0000e-04
Epoch 50/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.1344e-04 - val_loss: 2.6298e-04 - lr: 1.0000e-04
Epoch 51/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.0876e-04 - val_loss: 2.0776e-04 - lr: 1.0000e-04
Epoch 52/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.0718e-04 - val_loss: 2.1121e-04 - lr: 1.0000e-04
Epoch 53/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.0307e-04 - val_loss: 2.0588e-04 - lr: 1.0000e-04
Epoch 54/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.0388e-04 - val_loss: 1.9177e-04 - lr: 1.0000e-04
Epoch 55/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.0178e-04 - val_loss: 2.0274e-04 - lr: 1.0000e-04
Epoch 56/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.0178e-04 - val_loss: 2.2856e-04 - lr: 1.0000e-04
Epoch 57/2000
900/900 [==============================] - 1s 1ms/step - loss: 1.0126e-04 - val_loss: 2.3026e-04 - lr: 1.0000e-04
Epoch 58/2000
900/900 [==============================] - 1s 1ms/step - loss: 9.3505e-05 - val_loss: 2.0378e-04 - lr: 1.0000e-04
Epoch 59/2000
900/900 [==============================] - 1s 1ms/step - loss: 9.6313e-05 - val_loss: 1.8932e-04 - lr: 1.0000e-04
Epoch 60/2000
900/900 [==============================] - 1s 1ms/step - loss: 9.1047e-05 - val_loss: 1.8083e-04 - lr: 1.0000e-04
Epoch 61/2000
900/900 [==============================] - 1s 1ms/step - loss: 9.8763e-05 - val_loss: 1.8094e-04 - lr: 1.0000e-04
Epoch 62/2000
900/900 [==============================] - 1s 1ms/step - loss: 9.1101e-05 - val_loss: 1.8211e-04 - lr: 1.0000e-04
Epoch 63/2000
900/900 [==============================] - 1s 1ms/step - loss: 9.0412e-05 - val_loss: 1.9709e-04 - lr: 1.0000e-04
Epoch 64/2000
900/900 [==============================] - 1s 1ms/step - loss: 9.3721e-05 - val_loss: 2.6898e-04 - lr: 1.0000e-04
Epoch 65/2000
900/900 [==============================] - 1s 1ms/step - loss: 9.3494e-05 - val_loss: 1.8946e-04 - lr: 1.0000e-04
Epoch 66/2000
900/900 [==============================] - 1s 1ms/step - loss: 9.1262e-05 - val_loss: 1.7357e-04 - lr: 1.0000e-04
Epoch 67/2000
900/900 [==============================] - 1s 1ms/step - loss: 8.6589e-05 - val_loss: 1.6051e-04 - lr: 1.0000e-04
Epoch 68/2000
900/900 [==============================] - 1s 1ms/step - loss: 8.8745e-05 - val_loss: 1.6204e-04 - lr: 1.0000e-04
Epoch 69/2000
900/900 [==============================] - 1s 1ms/step - loss: 8.2716e-05 - val_loss: 1.5193e-04 - lr: 1.0000e-04
Epoch 70/2000
900/900 [==============================] - 1s 1ms/step - loss: 8.5516e-05 - val_loss: 1.4889e-04 - lr: 1.0000e-04
Epoch 71/2000
900/900 [==============================] - 1s 1ms/step - loss: 8.7297e-05 - val_loss: 1.4578e-04 - lr: 1.0000e-04
Epoch 72/2000
900/900 [==============================] - 1s 1ms/step - loss: 8.0910e-05 - val_loss: 1.4944e-04 - lr: 1.0000e-04
Epoch 73/2000
900/900 [==============================] - 1s 1ms/step - loss: 8.8776e-05 - val_loss: 1.7773e-04 - lr: 1.0000e-04
Epoch 74/2000
900/900 [==============================] - 1s 1ms/step - loss: 8.2522e-05 - val_loss: 1.4596e-04 - lr: 1.0000e-04
Epoch 75/2000
900/900 [==============================] - 1s 1ms/step - loss: 8.4255e-05 - val_loss: 1.3306e-04 - lr: 1.0000e-04
Epoch 76/2000
900/900 [==============================] - 1s 1ms/step - loss: 7.8852e-05 - val_loss: 1.6480e-04 - lr: 1.0000e-04
Epoch 77/2000
900/900 [==============================] - 1s 1ms/step - loss: 9.2539e-05 - val_loss: 1.3074e-04 - lr: 1.0000e-04
Epoch 78/2000
900/900 [==============================] - 1s 1ms/step - loss: 7.7328e-05 - val_loss: 1.2497e-04 - lr: 1.0000e-04
Epoch 79/2000
900/900 [==============================] - 1s 1ms/step - loss: 9.1113e-05 - val_loss: 1.4063e-04 - lr: 1.0000e-04
Epoch 80/2000
900/900 [==============================] - 1s 1ms/step - loss: 7.1518e-05 - val_loss: 1.1932e-04 - lr: 1.0000e-04
Epoch 81/2000
900/900 [==============================] - 1s 1ms/step - loss: 6.8567e-05 - val_loss: 1.3577e-04 - lr: 1.0000e-04
Epoch 82/2000
900/900 [==============================] - 1s 1ms/step - loss: 7.6544e-05 - val_loss: 1.2487e-04 - lr: 1.0000e-04
Epoch 83/2000
900/900 [==============================] - 1s 1ms/step - loss: 7.4925e-05 - val_loss: 1.4861e-04 - lr: 1.0000e-04
Epoch 84/2000
900/900 [==============================] - 1s 1ms/step - loss: 7.4571e-05 - val_loss: 1.4264e-04 - lr: 1.0000e-04
Epoch 85/2000
900/900 [==============================] - 1s 1ms/step - loss: 6.9797e-05 - val_loss: 1.1468e-04 - lr: 1.0000e-04
Epoch 86/2000
900/900 [==============================] - 1s 1ms/step - loss: 7.4416e-05 - val_loss: 1.3066e-04 - lr: 1.0000e-04
Epoch 87/2000
900/900 [==============================] - 1s 1ms/step - loss: 7.3910e-05 - val_loss: 1.5394e-04 - lr: 1.0000e-04
Epoch 88/2000
900/900 [==============================] - 1s 1ms/step - loss: 6.9360e-05 - val_loss: 1.1626e-04 - lr: 1.0000e-04
Epoch 89/2000
900/900 [==============================] - 1s 1ms/step - loss: 6.7308e-05 - val_loss: 1.1188e-04 - lr: 1.0000e-04
Epoch 90/2000
900/900 [==============================] - 1s 1ms/step - loss: 7.2721e-05 - val_loss: 1.1300e-04 - lr: 1.0000e-04
Epoch 91/2000
900/900 [==============================] - 1s 1ms/step - loss: 6.7185e-05 - val_loss: 1.2724e-04 - lr: 1.0000e-04
Epoch 92/2000
900/900 [==============================] - 1s 1ms/step - loss: 7.7134e-05 - val_loss: 1.1569e-04 - lr: 1.0000e-04
Epoch 93/2000
900/900 [==============================] - 1s 1ms/step - loss: 6.8117e-05 - val_loss: 1.1088e-04 - lr: 1.0000e-04
Epoch 94/2000
900/900 [==============================] - 1s 1ms/step - loss: 6.8846e-05 - val_loss: 1.3633e-04 - lr: 1.0000e-04
Epoch 95/2000
900/900 [==============================] - 1s 1ms/step - loss: 6.7340e-05 - val_loss: 1.2110e-04 - lr: 1.0000e-04
Epoch 96/2000
900/900 [==============================] - 1s 1ms/step - loss: 6.7974e-05 - val_loss: 1.7295e-04 - lr: 1.0000e-04
Epoch 97/2000
900/900 [==============================] - 1s 1ms/step - loss: 6.7266e-05 - val_loss: 1.1850e-04 - lr: 1.0000e-04
Epoch 98/2000
900/900 [==============================] - 1s 1ms/step - loss: 6.2741e-05 - val_loss: 1.2930e-04 - lr: 1.0000e-04
Epoch 99/2000
900/900 [==============================] - 1s 1ms/step - loss: 7.4239e-05 - val_loss: 1.0425e-04 - lr: 1.0000e-04
Epoch 100/2000
900/900 [==============================] - 1s 1ms/step - loss: 6.8227e-05 - val_loss: 1.0128e-04 - lr: 1.0000e-04
Epoch 101/2000
900/900 [==============================] - 1s 1ms/step - loss: 6.1065e-05 - val_loss: 1.0580e-04 - lr: 1.0000e-04
Epoch 102/2000
900/900 [==============================] - 1s 1ms/step - loss: 7.2595e-05 - val_loss: 1.0268e-04 - lr: 1.0000e-04
Epoch 103/2000
900/900 [==============================] - 1s 1ms/step - loss: 6.0114e-05 - val_loss: 1.3312e-04 - lr: 1.0000e-04
Epoch 104/2000
900/900 [==============================] - 1s 2ms/step - loss: 6.9939e-05 - val_loss: 1.1080e-04 - lr: 1.0000e-04
Epoch 105/2000
900/900 [==============================] - 2s 2ms/step - loss: 6.1671e-05 - val_loss: 9.3537e-05 - lr: 1.0000e-04
Epoch 106/2000
900/900 [==============================] - 2s 2ms/step - loss: 6.9090e-05 - val_loss: 1.0073e-04 - lr: 1.0000e-04
Epoch 107/2000
900/900 [==============================] - 1s 1ms/step - loss: 6.0073e-05 - val_loss: 1.4321e-04 - lr: 1.0000e-04
Epoch 108/2000
900/900 [==============================] - 1s 1ms/step - loss: 5.8665e-05 - val_loss: 1.0323e-04 - lr: 1.0000e-04
Epoch 109/2000
900/900 [==============================] - 1s 1ms/step - loss: 6.1377e-05 - val_loss: 2.0407e-04 - lr: 1.0000e-04
Epoch 110/2000
900/900 [==============================] - 1s 1ms/step - loss: 6.5973e-05 - val_loss: 9.2555e-05 - lr: 1.0000e-04
Epoch 111/2000
900/900 [==============================] - 1s 1ms/step - loss: 6.1832e-05 - val_loss: 1.1252e-04 - lr: 1.0000e-04
Epoch 112/2000
900/900 [==============================] - 1s 1ms/step - loss: 6.1293e-05 - val_loss: 8.5153e-05 - lr: 1.0000e-04
Epoch 113/2000
900/900 [==============================] - 1s 1ms/step - loss: 5.6221e-05 - val_loss: 9.7618e-05 - lr: 1.0000e-04
Epoch 114/2000
900/900 [==============================] - 1s 1ms/step - loss: 6.3836e-05 - val_loss: 9.4311e-05 - lr: 1.0000e-04
Epoch 115/2000
900/900 [==============================] - 1s 1ms/step - loss: 5.8413e-05 - val_loss: 1.3108e-04 - lr: 1.0000e-04
Epoch 116/2000
900/900 [==============================] - 1s 1ms/step - loss: 6.1561e-05 - val_loss: 9.7013e-05 - lr: 1.0000e-04
Epoch 117/2000
900/900 [==============================] - 1s 1ms/step - loss: 6.2772e-05 - val_loss: 1.0615e-04 - lr: 1.0000e-04
Epoch 118/2000
900/900 [==============================] - 1s 1ms/step - loss: 5.7437e-05 - val_loss: 1.3355e-04 - lr: 1.0000e-04
Epoch 119/2000
900/900 [==============================] - 1s 1ms/step - loss: 5.7079e-05 - val_loss: 9.6611e-05 - lr: 1.0000e-04
Epoch 120/2000
900/900 [==============================] - 1s 1ms/step - loss: 6.2548e-05 - val_loss: 1.0475e-04 - lr: 1.0000e-04
Epoch 121/2000
900/900 [==============================] - 1s 1ms/step - loss: 5.7250e-05 - val_loss: 1.1300e-04 - lr: 1.0000e-04
Epoch 122/2000
900/900 [==============================] - 1s 1ms/step - loss: 6.4314e-05 - val_loss: 9.1217e-05 - lr: 1.0000e-04
In [ ]:
# plotting loss function
fig, ax = plt.subplots(figsize=(10, 7))

ax.plot(history.history['loss'], label='Training Loss')
ax.plot(history.history['val_loss'], label='Validation Loss')

ax.set(
    title='Processo de otimização das funções de perda',
    ylabel='Loss',
    xlabel='Epoch'
)

plt.legend()
plt.tight_layout()
plt.show()
In [ ]:
# true points and predicted points
fig, ax = plt.subplots(figsize=(15, 9), subplot_kw=dict(projection='3d'))

ax.plot_wireframe(x1, x2, y, linewidths=0.5, color='lightgrey')
ax.scatter(x_test[:,0], x_test[:,1], y_test, s=14, color='C0', label='Training data')
ax.scatter(x_test[:,0], x_test[:,1], mlp.predict(x_test), s=15, marker='^', color='C1', label='Test data')

ax.set(
    xlabel='$x_1$',
    ylabel='$x_2$',
    zlabel='$f(x_1, x_2)$'
)

plt.legend()
plt.tight_layout()
plt.show()
63/63 [==============================] - 0s 1ms/step
In [ ]:
mse = mean_squared_error(y_test, mlp.predict(x_test))
rmse = mean_squared_error(y_test, mlp.predict(x_test), squared=False)
mae = mean_absolute_error(y_test, mlp.predict(x_test))
63/63 [==============================] - 0s 903us/step
63/63 [==============================] - 0s 988us/step
63/63 [==============================] - 0s 763us/step
In [ ]:
print(f"Mean Squared Error: {mse}")
print(f"Root Mean Squared Error: {rmse}")
print(f"Mean Absolute Error: {mae}")
Mean Squared Error: 4.9327674554969855e-05
Root Mean Squared Error: 0.007023366326411421
Mean Absolute Error: 0.0053168037015980385

Questão 2¶

Considere o problema de classificação de padrões bidimensionais constituído neste caso de 5 padrões. A distribuição dos padrões tem como base um quadrado centrado na origem interceptando os eixos nos pontos +1 e -1 de cada eixo. Os pontos +1 e -1 de cada eixo são centros de quatro semicírculos que se interceptam no interior do quadrado originando as classes 1,2,3,4 e a outra classe corresponde as regiões de não interseção. Após gerar aleato- riamente os dados que venham formar estas distribuições de dados, selecione um conjunto de treinamento e um conjunto de validação com o rótulo de cada classe. Solucione este problema considerando uma rede perceptron de múltiplas camada. Apresente na solução a curva do erro médio de treinamento e a curva do erro médio de teste. Apresente também a matriz de confusão.

Semicircle equations¶

Gráfico das equações descritas abaixo:
In [ ]:
# Semicírculo azul
def c1(x, y):
    if (x + 1)**2 + y**2 <= 1:
      return 1
    else:
      return 0

#Sermicírculo verde
def c2(x, y):
    if (x - 1)**2 + y**2 <= 1:
      return 1
    else:
      return 0

#Semicírculo amarelo
def c3(x, y):
    if x**2 + (y + 1)**2 <= 1:
      return 1
    else:
      return 0

#Semicírculo vermelha=o
def c4(x, y):
    if x**2 + (y - 1)**2 <= 1:
      return 1
    else:
      return 0

Classificando os pontos¶

  • Classe 0: região de fora
  • Classe 1: região dentro dos semicírculos azul e vermelho
  • Classe 2: região dentro dos semicírculos verde e vermelho
  • Classe 3: região dentro dos semicírculos verde e amarelo
  • Classe 4: região dentro dos semicírculos azul e amarelo
In [ ]:
x, y = np.meshgrid(np.linspace(-1, 1, 100), np.linspace(-1, 1, 100))

points = np.vstack(list(zip(x.ravel(), y.ravel())))

lista = []
for x_i, y_i in points:
  if c1(x_i, y_i) + c4(x_i, y_i) == 2:
    lista.append(1)
  elif c2(x_i, y_i) + c4(x_i, y_i) == 2:
    lista.append(2)
  elif c2(x_i, y_i) + c3(x_i, y_i) == 2:
    lista.append(3)
  elif c1(x_i, y_i) + c3(x_i, y_i) == 2:
    lista.append(4)
  else:
    lista.append(0)

labels = np.array(lista)

Train test split¶

In [ ]:
x_train, x_test, y_train, y_test = train_test_split(points, labels, test_size=0.2, stratify=labels)
In [ ]:
print(f"shape x train: {x_train.shape}")
print(f"shape y train: {y_train.shape}")
print(f"shape x test: {x_test.shape}")
print(f"shape y test: {y_test.shape}")
shape x train: (8000, 2)
shape y train: (8000,)
shape x test: (2000, 2)
shape y test: (2000,)
In [ ]:
fig, ax = plt.subplots(ncols=3, figsize=(40, 12))

color1 = (181/255, 181/255, 181/255, 1.0)
color2 = (38/255, 118/255, 222/255, 1.0)
color3 = (38/255, 222/255, 118/255, 1.0)
color4 = (235/255, 91/255, 156/255, 1.0)
color5 = (240/255, 130/255, 44/255, 1.0)

colormap = np.array([color1, color2, color3, color4, color5])

dataset_scatter = ax[0].scatter(points[:,0], points[:,1], c=colormap[labels], marker='d')

ax[0].set(
    title='Dataset',
    xlabel='$x$',
    ylabel='$y$'
)

ax[1].scatter(x_train[:,0], x_train[:,1], c=colormap[y_train], marker='d')

ax[1].set(
    title='Training set',
    xlabel='$x$',
    ylabel='$y$'
)

ax[2].scatter(x_test[:,0], x_test[:,1], c=colormap[y_test], marker='d')

ax[2].set(
    title='Test set',
    xlabel='$x$',
    ylabel='$y$'
)

from matplotlib.patches import Patch
from matplotlib.lines import Line2D

legend_elements = [
    Line2D([0], [0], marker='o', color='w', label='Scatter', markerfacecolor=(181/255, 181/255, 181/255), markersize=15),
    Line2D([0], [0], marker='o', color='w', label='Scatter', markerfacecolor=(38/255, 118/255, 222/255), markersize=15),
    Line2D([0], [0], marker='o', color='w', label='Scatter', markerfacecolor=(38/255, 222/255, 118/255), markersize=15),
    Line2D([0], [0], marker='o', color='w', label='Scatter', markerfacecolor=(235/255, 91/255, 156/255), markersize=15),
    Line2D([0], [0], marker='o', color='w', label='Scatter', markerfacecolor=(240/255, 130/255, 44/255), markersize=15)
]

fig.legend(
    legend_elements,
    ['0', '1', '2', '3', '4'],
    loc='lower center',
    title='Classes'
)

plt.show()

Define the model¶

In [ ]:
mlp = Sequential([
    Dense(64, activation='relu', input_shape=(2,)),
    Dense(32, activation='relu'),
    Dense(16, activation='relu'),
    Dense(8, activation='relu'),
    Dense(5, activation='softmax')
])

mlp.compile(
    loss='sparse_categorical_crossentropy',
    optimizer='adam',
    metrics=['acc']
)


mlp.summary()
Model: "sequential_1"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 dense_5 (Dense)             (None, 64)                192       
                                                                 
 dense_6 (Dense)             (None, 32)                2080      
                                                                 
 dense_7 (Dense)             (None, 16)                528       
                                                                 
 dense_8 (Dense)             (None, 8)                 136       
                                                                 
 dense_9 (Dense)             (None, 5)                 45        
                                                                 
=================================================================
Total params: 2,981
Trainable params: 2,981
Non-trainable params: 0
_________________________________________________________________

Training the model¶

In [ ]:
history = mlp.fit(
    x_train, y_train.reshape((-1,1)),
    validation_split=0.1,
    batch_size=10,
    epochs=2000,
    callbacks=[
        tf.keras.callbacks.EarlyStopping(monitor='loss', patience=10),
        tf.keras.callbacks.ReduceLROnPlateau(monitor='loss', factor=0.1, patience=5, min_lr=0.0001)
    ]
)
Epoch 1/2000
720/720 [==============================] - 2s 3ms/step - loss: 0.7271 - acc: 0.6861 - val_loss: 0.4313 - val_acc: 0.8350 - lr: 0.0010
Epoch 2/2000
720/720 [==============================] - 2s 3ms/step - loss: 0.3175 - acc: 0.8826 - val_loss: 0.2762 - val_acc: 0.8975 - lr: 0.0010
Epoch 3/2000
720/720 [==============================] - 2s 3ms/step - loss: 0.2327 - acc: 0.9156 - val_loss: 0.2305 - val_acc: 0.9125 - lr: 0.0010
Epoch 4/2000
720/720 [==============================] - 2s 3ms/step - loss: 0.1948 - acc: 0.9292 - val_loss: 0.1951 - val_acc: 0.9162 - lr: 0.0010
Epoch 5/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.1680 - acc: 0.9410 - val_loss: 0.1584 - val_acc: 0.9438 - lr: 0.0010
Epoch 6/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.1588 - acc: 0.9375 - val_loss: 0.1548 - val_acc: 0.9375 - lr: 0.0010
Epoch 7/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.1468 - acc: 0.9440 - val_loss: 0.1422 - val_acc: 0.9438 - lr: 0.0010
Epoch 8/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.1337 - acc: 0.9468 - val_loss: 0.1154 - val_acc: 0.9550 - lr: 0.0010
Epoch 9/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.1251 - acc: 0.9507 - val_loss: 0.1369 - val_acc: 0.9350 - lr: 0.0010
Epoch 10/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.1272 - acc: 0.9499 - val_loss: 0.1201 - val_acc: 0.9575 - lr: 0.0010
Epoch 11/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.1169 - acc: 0.9533 - val_loss: 0.1246 - val_acc: 0.9438 - lr: 0.0010
Epoch 12/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.1170 - acc: 0.9517 - val_loss: 0.1711 - val_acc: 0.9212 - lr: 0.0010
Epoch 13/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.1165 - acc: 0.9532 - val_loss: 0.1301 - val_acc: 0.9438 - lr: 0.0010
Epoch 14/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.1164 - acc: 0.9524 - val_loss: 0.0993 - val_acc: 0.9588 - lr: 0.0010
Epoch 15/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.1120 - acc: 0.9519 - val_loss: 0.1097 - val_acc: 0.9488 - lr: 0.0010
Epoch 16/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.1062 - acc: 0.9569 - val_loss: 0.1492 - val_acc: 0.9337 - lr: 0.0010
Epoch 17/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.1104 - acc: 0.9533 - val_loss: 0.0911 - val_acc: 0.9588 - lr: 0.0010
Epoch 18/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.1109 - acc: 0.9550 - val_loss: 0.1208 - val_acc: 0.9525 - lr: 0.0010
Epoch 19/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.1010 - acc: 0.9592 - val_loss: 0.1122 - val_acc: 0.9513 - lr: 0.0010
Epoch 20/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0980 - acc: 0.9599 - val_loss: 0.0863 - val_acc: 0.9638 - lr: 0.0010
Epoch 21/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0965 - acc: 0.9596 - val_loss: 0.0889 - val_acc: 0.9613 - lr: 0.0010
Epoch 22/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0994 - acc: 0.9597 - val_loss: 0.1672 - val_acc: 0.9262 - lr: 0.0010
Epoch 23/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0979 - acc: 0.9589 - val_loss: 0.1394 - val_acc: 0.9463 - lr: 0.0010
Epoch 24/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.1035 - acc: 0.9558 - val_loss: 0.0798 - val_acc: 0.9638 - lr: 0.0010
Epoch 25/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0924 - acc: 0.9633 - val_loss: 0.0743 - val_acc: 0.9688 - lr: 0.0010
Epoch 26/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0937 - acc: 0.9601 - val_loss: 0.1888 - val_acc: 0.9425 - lr: 0.0010
Epoch 27/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.1012 - acc: 0.9583 - val_loss: 0.0916 - val_acc: 0.9600 - lr: 0.0010
Epoch 28/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0925 - acc: 0.9617 - val_loss: 0.0768 - val_acc: 0.9613 - lr: 0.0010
Epoch 29/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0933 - acc: 0.9632 - val_loss: 0.1094 - val_acc: 0.9513 - lr: 0.0010
Epoch 30/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0930 - acc: 0.9617 - val_loss: 0.0828 - val_acc: 0.9613 - lr: 0.0010
Epoch 31/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0574 - acc: 0.9829 - val_loss: 0.0603 - val_acc: 0.9737 - lr: 1.0000e-04
Epoch 32/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0554 - acc: 0.9847 - val_loss: 0.0665 - val_acc: 0.9712 - lr: 1.0000e-04
Epoch 33/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0549 - acc: 0.9856 - val_loss: 0.0587 - val_acc: 0.9762 - lr: 1.0000e-04
Epoch 34/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0556 - acc: 0.9844 - val_loss: 0.0582 - val_acc: 0.9800 - lr: 1.0000e-04
Epoch 35/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0544 - acc: 0.9847 - val_loss: 0.0599 - val_acc: 0.9787 - lr: 1.0000e-04
Epoch 36/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0542 - acc: 0.9847 - val_loss: 0.0614 - val_acc: 0.9762 - lr: 1.0000e-04
Epoch 37/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0544 - acc: 0.9851 - val_loss: 0.0634 - val_acc: 0.9737 - lr: 1.0000e-04
Epoch 38/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0536 - acc: 0.9846 - val_loss: 0.0570 - val_acc: 0.9787 - lr: 1.0000e-04
Epoch 39/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0530 - acc: 0.9854 - val_loss: 0.0511 - val_acc: 0.9837 - lr: 1.0000e-04
Epoch 40/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0536 - acc: 0.9858 - val_loss: 0.0554 - val_acc: 0.9800 - lr: 1.0000e-04
Epoch 41/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0527 - acc: 0.9862 - val_loss: 0.0612 - val_acc: 0.9737 - lr: 1.0000e-04
Epoch 42/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0527 - acc: 0.9856 - val_loss: 0.0540 - val_acc: 0.9837 - lr: 1.0000e-04
Epoch 43/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0519 - acc: 0.9861 - val_loss: 0.0597 - val_acc: 0.9750 - lr: 1.0000e-04
Epoch 44/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0513 - acc: 0.9869 - val_loss: 0.0547 - val_acc: 0.9762 - lr: 1.0000e-04
Epoch 45/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0514 - acc: 0.9861 - val_loss: 0.0577 - val_acc: 0.9800 - lr: 1.0000e-04
Epoch 46/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0511 - acc: 0.9850 - val_loss: 0.0548 - val_acc: 0.9775 - lr: 1.0000e-04
Epoch 47/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0518 - acc: 0.9847 - val_loss: 0.0580 - val_acc: 0.9775 - lr: 1.0000e-04
Epoch 48/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0509 - acc: 0.9861 - val_loss: 0.0565 - val_acc: 0.9787 - lr: 1.0000e-04
Epoch 49/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0502 - acc: 0.9868 - val_loss: 0.0508 - val_acc: 0.9850 - lr: 1.0000e-04
Epoch 50/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0505 - acc: 0.9854 - val_loss: 0.0611 - val_acc: 0.9750 - lr: 1.0000e-04
Epoch 51/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0492 - acc: 0.9862 - val_loss: 0.0535 - val_acc: 0.9800 - lr: 1.0000e-04
Epoch 52/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0493 - acc: 0.9858 - val_loss: 0.0552 - val_acc: 0.9787 - lr: 1.0000e-04
Epoch 53/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0495 - acc: 0.9868 - val_loss: 0.0559 - val_acc: 0.9750 - lr: 1.0000e-04
Epoch 54/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0487 - acc: 0.9856 - val_loss: 0.0526 - val_acc: 0.9800 - lr: 1.0000e-04
Epoch 55/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0490 - acc: 0.9867 - val_loss: 0.0565 - val_acc: 0.9775 - lr: 1.0000e-04
Epoch 56/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0484 - acc: 0.9864 - val_loss: 0.0520 - val_acc: 0.9825 - lr: 1.0000e-04
Epoch 57/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0481 - acc: 0.9882 - val_loss: 0.0494 - val_acc: 0.9825 - lr: 1.0000e-04
Epoch 58/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0476 - acc: 0.9882 - val_loss: 0.0550 - val_acc: 0.9800 - lr: 1.0000e-04
Epoch 59/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0478 - acc: 0.9872 - val_loss: 0.0577 - val_acc: 0.9762 - lr: 1.0000e-04
Epoch 60/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0480 - acc: 0.9864 - val_loss: 0.0521 - val_acc: 0.9787 - lr: 1.0000e-04
Epoch 61/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0476 - acc: 0.9872 - val_loss: 0.0478 - val_acc: 0.9825 - lr: 1.0000e-04
Epoch 62/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0477 - acc: 0.9865 - val_loss: 0.0485 - val_acc: 0.9812 - lr: 1.0000e-04
Epoch 63/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0471 - acc: 0.9857 - val_loss: 0.0510 - val_acc: 0.9800 - lr: 1.0000e-04
Epoch 64/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0464 - acc: 0.9890 - val_loss: 0.0604 - val_acc: 0.9737 - lr: 1.0000e-04
Epoch 65/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0470 - acc: 0.9878 - val_loss: 0.0538 - val_acc: 0.9750 - lr: 1.0000e-04
Epoch 66/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0463 - acc: 0.9878 - val_loss: 0.0523 - val_acc: 0.9787 - lr: 1.0000e-04
Epoch 67/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0462 - acc: 0.9868 - val_loss: 0.0496 - val_acc: 0.9850 - lr: 1.0000e-04
Epoch 68/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0458 - acc: 0.9871 - val_loss: 0.0516 - val_acc: 0.9800 - lr: 1.0000e-04
Epoch 69/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0458 - acc: 0.9875 - val_loss: 0.0488 - val_acc: 0.9825 - lr: 1.0000e-04
Epoch 70/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0460 - acc: 0.9871 - val_loss: 0.0504 - val_acc: 0.9775 - lr: 1.0000e-04
Epoch 71/2000
720/720 [==============================] - 2s 3ms/step - loss: 0.0462 - acc: 0.9868 - val_loss: 0.0586 - val_acc: 0.9725 - lr: 1.0000e-04
Epoch 72/2000
720/720 [==============================] - 2s 3ms/step - loss: 0.0450 - acc: 0.9879 - val_loss: 0.0515 - val_acc: 0.9825 - lr: 1.0000e-04
Epoch 73/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0450 - acc: 0.9882 - val_loss: 0.0560 - val_acc: 0.9725 - lr: 1.0000e-04
Epoch 74/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0450 - acc: 0.9890 - val_loss: 0.0512 - val_acc: 0.9787 - lr: 1.0000e-04
Epoch 75/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0452 - acc: 0.9881 - val_loss: 0.0480 - val_acc: 0.9812 - lr: 1.0000e-04
Epoch 76/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0441 - acc: 0.9889 - val_loss: 0.0443 - val_acc: 0.9862 - lr: 1.0000e-04
Epoch 77/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0446 - acc: 0.9868 - val_loss: 0.0481 - val_acc: 0.9825 - lr: 1.0000e-04
Epoch 78/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0442 - acc: 0.9876 - val_loss: 0.0562 - val_acc: 0.9737 - lr: 1.0000e-04
Epoch 79/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0439 - acc: 0.9885 - val_loss: 0.0434 - val_acc: 0.9862 - lr: 1.0000e-04
Epoch 80/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0443 - acc: 0.9867 - val_loss: 0.0488 - val_acc: 0.9812 - lr: 1.0000e-04
Epoch 81/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0437 - acc: 0.9893 - val_loss: 0.0470 - val_acc: 0.9812 - lr: 1.0000e-04
Epoch 82/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0436 - acc: 0.9885 - val_loss: 0.0456 - val_acc: 0.9875 - lr: 1.0000e-04
Epoch 83/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0432 - acc: 0.9885 - val_loss: 0.0515 - val_acc: 0.9775 - lr: 1.0000e-04
Epoch 84/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0443 - acc: 0.9871 - val_loss: 0.0486 - val_acc: 0.9800 - lr: 1.0000e-04
Epoch 85/2000
720/720 [==============================] - 2s 3ms/step - loss: 0.0431 - acc: 0.9886 - val_loss: 0.0446 - val_acc: 0.9875 - lr: 1.0000e-04
Epoch 86/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0430 - acc: 0.9886 - val_loss: 0.0506 - val_acc: 0.9762 - lr: 1.0000e-04
Epoch 87/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0430 - acc: 0.9886 - val_loss: 0.0503 - val_acc: 0.9800 - lr: 1.0000e-04
Epoch 88/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0430 - acc: 0.9890 - val_loss: 0.0529 - val_acc: 0.9762 - lr: 1.0000e-04
Epoch 89/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0423 - acc: 0.9887 - val_loss: 0.0550 - val_acc: 0.9712 - lr: 1.0000e-04
Epoch 90/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0426 - acc: 0.9889 - val_loss: 0.0499 - val_acc: 0.9775 - lr: 1.0000e-04
Epoch 91/2000
720/720 [==============================] - 3s 4ms/step - loss: 0.0415 - acc: 0.9906 - val_loss: 0.0428 - val_acc: 0.9862 - lr: 1.0000e-04
Epoch 92/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0422 - acc: 0.9882 - val_loss: 0.0472 - val_acc: 0.9837 - lr: 1.0000e-04
Epoch 93/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0422 - acc: 0.9883 - val_loss: 0.0434 - val_acc: 0.9825 - lr: 1.0000e-04
Epoch 94/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0425 - acc: 0.9883 - val_loss: 0.0464 - val_acc: 0.9837 - lr: 1.0000e-04
Epoch 95/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0415 - acc: 0.9903 - val_loss: 0.0567 - val_acc: 0.9762 - lr: 1.0000e-04
Epoch 96/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0416 - acc: 0.9903 - val_loss: 0.0423 - val_acc: 0.9850 - lr: 1.0000e-04
Epoch 97/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0418 - acc: 0.9889 - val_loss: 0.0464 - val_acc: 0.9750 - lr: 1.0000e-04
Epoch 98/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0410 - acc: 0.9906 - val_loss: 0.0502 - val_acc: 0.9750 - lr: 1.0000e-04
Epoch 99/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0414 - acc: 0.9896 - val_loss: 0.0430 - val_acc: 0.9887 - lr: 1.0000e-04
Epoch 100/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0408 - acc: 0.9894 - val_loss: 0.0437 - val_acc: 0.9875 - lr: 1.0000e-04
Epoch 101/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0413 - acc: 0.9885 - val_loss: 0.0478 - val_acc: 0.9800 - lr: 1.0000e-04
Epoch 102/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0408 - acc: 0.9893 - val_loss: 0.0418 - val_acc: 0.9862 - lr: 1.0000e-04
Epoch 103/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0403 - acc: 0.9896 - val_loss: 0.0454 - val_acc: 0.9787 - lr: 1.0000e-04
Epoch 104/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0406 - acc: 0.9881 - val_loss: 0.0420 - val_acc: 0.9887 - lr: 1.0000e-04
Epoch 105/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0410 - acc: 0.9883 - val_loss: 0.0502 - val_acc: 0.9762 - lr: 1.0000e-04
Epoch 106/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0410 - acc: 0.9889 - val_loss: 0.0394 - val_acc: 0.9900 - lr: 1.0000e-04
Epoch 107/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0401 - acc: 0.9887 - val_loss: 0.0515 - val_acc: 0.9762 - lr: 1.0000e-04
Epoch 108/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0400 - acc: 0.9900 - val_loss: 0.0409 - val_acc: 0.9850 - lr: 1.0000e-04
Epoch 109/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0405 - acc: 0.9892 - val_loss: 0.0411 - val_acc: 0.9862 - lr: 1.0000e-04
Epoch 110/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0407 - acc: 0.9879 - val_loss: 0.0589 - val_acc: 0.9688 - lr: 1.0000e-04
Epoch 111/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0392 - acc: 0.9896 - val_loss: 0.0581 - val_acc: 0.9762 - lr: 1.0000e-04
Epoch 112/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0394 - acc: 0.9896 - val_loss: 0.0492 - val_acc: 0.9800 - lr: 1.0000e-04
Epoch 113/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0395 - acc: 0.9890 - val_loss: 0.0449 - val_acc: 0.9837 - lr: 1.0000e-04
Epoch 114/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0396 - acc: 0.9878 - val_loss: 0.0389 - val_acc: 0.9875 - lr: 1.0000e-04
Epoch 115/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0393 - acc: 0.9901 - val_loss: 0.0464 - val_acc: 0.9750 - lr: 1.0000e-04
Epoch 116/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0395 - acc: 0.9893 - val_loss: 0.0502 - val_acc: 0.9725 - lr: 1.0000e-04
Epoch 117/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0390 - acc: 0.9885 - val_loss: 0.0448 - val_acc: 0.9800 - lr: 1.0000e-04
Epoch 118/2000
720/720 [==============================] - 2s 3ms/step - loss: 0.0385 - acc: 0.9899 - val_loss: 0.0396 - val_acc: 0.9862 - lr: 1.0000e-04
Epoch 119/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0391 - acc: 0.9900 - val_loss: 0.0409 - val_acc: 0.9837 - lr: 1.0000e-04
Epoch 120/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0386 - acc: 0.9914 - val_loss: 0.0514 - val_acc: 0.9775 - lr: 1.0000e-04
Epoch 121/2000
720/720 [==============================] - 2s 3ms/step - loss: 0.0392 - acc: 0.9889 - val_loss: 0.0431 - val_acc: 0.9812 - lr: 1.0000e-04
Epoch 122/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0389 - acc: 0.9894 - val_loss: 0.0447 - val_acc: 0.9775 - lr: 1.0000e-04
Epoch 123/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0382 - acc: 0.9894 - val_loss: 0.0507 - val_acc: 0.9750 - lr: 1.0000e-04
Epoch 124/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0384 - acc: 0.9899 - val_loss: 0.0406 - val_acc: 0.9862 - lr: 1.0000e-04
Epoch 125/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0378 - acc: 0.9901 - val_loss: 0.0470 - val_acc: 0.9825 - lr: 1.0000e-04
Epoch 126/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0384 - acc: 0.9889 - val_loss: 0.0460 - val_acc: 0.9812 - lr: 1.0000e-04
Epoch 127/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0383 - acc: 0.9889 - val_loss: 0.0390 - val_acc: 0.9887 - lr: 1.0000e-04
Epoch 128/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0381 - acc: 0.9875 - val_loss: 0.0396 - val_acc: 0.9862 - lr: 1.0000e-04
Epoch 129/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0381 - acc: 0.9894 - val_loss: 0.0554 - val_acc: 0.9750 - lr: 1.0000e-04
Epoch 130/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0378 - acc: 0.9889 - val_loss: 0.0478 - val_acc: 0.9725 - lr: 1.0000e-04
Epoch 131/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0373 - acc: 0.9903 - val_loss: 0.0416 - val_acc: 0.9850 - lr: 1.0000e-04
Epoch 132/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0383 - acc: 0.9893 - val_loss: 0.0394 - val_acc: 0.9900 - lr: 1.0000e-04
Epoch 133/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0372 - acc: 0.9899 - val_loss: 0.0383 - val_acc: 0.9900 - lr: 1.0000e-04
Epoch 134/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0373 - acc: 0.9901 - val_loss: 0.0469 - val_acc: 0.9775 - lr: 1.0000e-04
Epoch 135/2000
720/720 [==============================] - 2s 3ms/step - loss: 0.0371 - acc: 0.9897 - val_loss: 0.0401 - val_acc: 0.9850 - lr: 1.0000e-04
Epoch 136/2000
720/720 [==============================] - 2s 3ms/step - loss: 0.0377 - acc: 0.9890 - val_loss: 0.0436 - val_acc: 0.9800 - lr: 1.0000e-04
Epoch 137/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0368 - acc: 0.9894 - val_loss: 0.0405 - val_acc: 0.9825 - lr: 1.0000e-04
Epoch 138/2000
720/720 [==============================] - 2s 3ms/step - loss: 0.0363 - acc: 0.9897 - val_loss: 0.0450 - val_acc: 0.9800 - lr: 1.0000e-04
Epoch 139/2000
720/720 [==============================] - 2s 3ms/step - loss: 0.0371 - acc: 0.9900 - val_loss: 0.0392 - val_acc: 0.9862 - lr: 1.0000e-04
Epoch 140/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0370 - acc: 0.9899 - val_loss: 0.0474 - val_acc: 0.9750 - lr: 1.0000e-04
Epoch 141/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0363 - acc: 0.9892 - val_loss: 0.0386 - val_acc: 0.9812 - lr: 1.0000e-04
Epoch 142/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0371 - acc: 0.9889 - val_loss: 0.0436 - val_acc: 0.9850 - lr: 1.0000e-04
Epoch 143/2000
720/720 [==============================] - 2s 3ms/step - loss: 0.0365 - acc: 0.9908 - val_loss: 0.0487 - val_acc: 0.9737 - lr: 1.0000e-04
Epoch 144/2000
720/720 [==============================] - 2s 3ms/step - loss: 0.0366 - acc: 0.9892 - val_loss: 0.0380 - val_acc: 0.9850 - lr: 1.0000e-04
Epoch 145/2000
720/720 [==============================] - 2s 3ms/step - loss: 0.0363 - acc: 0.9900 - val_loss: 0.0393 - val_acc: 0.9837 - lr: 1.0000e-04
Epoch 146/2000
720/720 [==============================] - 2s 3ms/step - loss: 0.0371 - acc: 0.9899 - val_loss: 0.0374 - val_acc: 0.9887 - lr: 1.0000e-04
Epoch 147/2000
720/720 [==============================] - 2s 3ms/step - loss: 0.0362 - acc: 0.9894 - val_loss: 0.0489 - val_acc: 0.9787 - lr: 1.0000e-04
Epoch 148/2000
720/720 [==============================] - 2s 3ms/step - loss: 0.0359 - acc: 0.9894 - val_loss: 0.0459 - val_acc: 0.9775 - lr: 1.0000e-04
Epoch 149/2000
720/720 [==============================] - 2s 3ms/step - loss: 0.0360 - acc: 0.9899 - val_loss: 0.0328 - val_acc: 0.9937 - lr: 1.0000e-04
Epoch 150/2000
720/720 [==============================] - 2s 3ms/step - loss: 0.0357 - acc: 0.9892 - val_loss: 0.0459 - val_acc: 0.9800 - lr: 1.0000e-04
Epoch 151/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0351 - acc: 0.9896 - val_loss: 0.0425 - val_acc: 0.9850 - lr: 1.0000e-04
Epoch 152/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0365 - acc: 0.9885 - val_loss: 0.0402 - val_acc: 0.9800 - lr: 1.0000e-04
Epoch 153/2000
720/720 [==============================] - 2s 3ms/step - loss: 0.0350 - acc: 0.9906 - val_loss: 0.0390 - val_acc: 0.9850 - lr: 1.0000e-04
Epoch 154/2000
720/720 [==============================] - 2s 3ms/step - loss: 0.0362 - acc: 0.9894 - val_loss: 0.0378 - val_acc: 0.9825 - lr: 1.0000e-04
Epoch 155/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0365 - acc: 0.9878 - val_loss: 0.0366 - val_acc: 0.9900 - lr: 1.0000e-04
Epoch 156/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0352 - acc: 0.9907 - val_loss: 0.0386 - val_acc: 0.9862 - lr: 1.0000e-04
Epoch 157/2000
720/720 [==============================] - 2s 3ms/step - loss: 0.0351 - acc: 0.9897 - val_loss: 0.0435 - val_acc: 0.9762 - lr: 1.0000e-04
Epoch 158/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0350 - acc: 0.9910 - val_loss: 0.0530 - val_acc: 0.9700 - lr: 1.0000e-04
Epoch 159/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0345 - acc: 0.9904 - val_loss: 0.0379 - val_acc: 0.9812 - lr: 1.0000e-04
Epoch 160/2000
720/720 [==============================] - 2s 3ms/step - loss: 0.0359 - acc: 0.9886 - val_loss: 0.0375 - val_acc: 0.9887 - lr: 1.0000e-04
Epoch 161/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0347 - acc: 0.9906 - val_loss: 0.0394 - val_acc: 0.9800 - lr: 1.0000e-04
Epoch 162/2000
720/720 [==============================] - 3s 4ms/step - loss: 0.0345 - acc: 0.9900 - val_loss: 0.0400 - val_acc: 0.9875 - lr: 1.0000e-04
Epoch 163/2000
720/720 [==============================] - 3s 4ms/step - loss: 0.0346 - acc: 0.9907 - val_loss: 0.0372 - val_acc: 0.9850 - lr: 1.0000e-04
Epoch 164/2000
720/720 [==============================] - 2s 3ms/step - loss: 0.0348 - acc: 0.9892 - val_loss: 0.0340 - val_acc: 0.9925 - lr: 1.0000e-04
Epoch 165/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0346 - acc: 0.9897 - val_loss: 0.0386 - val_acc: 0.9850 - lr: 1.0000e-04
Epoch 166/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0356 - acc: 0.9887 - val_loss: 0.0398 - val_acc: 0.9825 - lr: 1.0000e-04
Epoch 167/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0347 - acc: 0.9904 - val_loss: 0.0436 - val_acc: 0.9825 - lr: 1.0000e-04
Epoch 168/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0345 - acc: 0.9897 - val_loss: 0.0350 - val_acc: 0.9875 - lr: 1.0000e-04
Epoch 169/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0333 - acc: 0.9899 - val_loss: 0.0433 - val_acc: 0.9787 - lr: 1.0000e-04
Epoch 170/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0345 - acc: 0.9890 - val_loss: 0.0370 - val_acc: 0.9837 - lr: 1.0000e-04
Epoch 171/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0343 - acc: 0.9897 - val_loss: 0.0417 - val_acc: 0.9825 - lr: 1.0000e-04
Epoch 172/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0342 - acc: 0.9896 - val_loss: 0.0353 - val_acc: 0.9862 - lr: 1.0000e-04
Epoch 173/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0348 - acc: 0.9904 - val_loss: 0.0407 - val_acc: 0.9812 - lr: 1.0000e-04
Epoch 174/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0334 - acc: 0.9912 - val_loss: 0.0405 - val_acc: 0.9812 - lr: 1.0000e-04
Epoch 175/2000
720/720 [==============================] - 3s 4ms/step - loss: 0.0338 - acc: 0.9912 - val_loss: 0.0392 - val_acc: 0.9825 - lr: 1.0000e-04
Epoch 176/2000
720/720 [==============================] - 4s 6ms/step - loss: 0.0341 - acc: 0.9892 - val_loss: 0.0326 - val_acc: 0.9887 - lr: 1.0000e-04
Epoch 177/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0339 - acc: 0.9907 - val_loss: 0.0433 - val_acc: 0.9775 - lr: 1.0000e-04
Epoch 178/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0328 - acc: 0.9896 - val_loss: 0.0374 - val_acc: 0.9850 - lr: 1.0000e-04
Epoch 179/2000
720/720 [==============================] - 5s 8ms/step - loss: 0.0339 - acc: 0.9889 - val_loss: 0.0330 - val_acc: 0.9887 - lr: 1.0000e-04
Epoch 180/2000
720/720 [==============================] - 3s 4ms/step - loss: 0.0330 - acc: 0.9907 - val_loss: 0.0345 - val_acc: 0.9912 - lr: 1.0000e-04
Epoch 181/2000
720/720 [==============================] - 4s 6ms/step - loss: 0.0335 - acc: 0.9903 - val_loss: 0.0344 - val_acc: 0.9887 - lr: 1.0000e-04
Epoch 182/2000
720/720 [==============================] - 4s 5ms/step - loss: 0.0325 - acc: 0.9915 - val_loss: 0.0348 - val_acc: 0.9875 - lr: 1.0000e-04
Epoch 183/2000
720/720 [==============================] - 2s 3ms/step - loss: 0.0334 - acc: 0.9904 - val_loss: 0.0411 - val_acc: 0.9787 - lr: 1.0000e-04
Epoch 184/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0336 - acc: 0.9906 - val_loss: 0.0375 - val_acc: 0.9837 - lr: 1.0000e-04
Epoch 185/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0334 - acc: 0.9903 - val_loss: 0.0397 - val_acc: 0.9825 - lr: 1.0000e-04
Epoch 186/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0329 - acc: 0.9907 - val_loss: 0.0452 - val_acc: 0.9750 - lr: 1.0000e-04
Epoch 187/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0334 - acc: 0.9908 - val_loss: 0.0334 - val_acc: 0.9862 - lr: 1.0000e-04
Epoch 188/2000
720/720 [==============================] - 3s 5ms/step - loss: 0.0332 - acc: 0.9906 - val_loss: 0.0378 - val_acc: 0.9825 - lr: 1.0000e-04
Epoch 189/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0329 - acc: 0.9904 - val_loss: 0.0425 - val_acc: 0.9800 - lr: 1.0000e-04
Epoch 190/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0328 - acc: 0.9901 - val_loss: 0.0316 - val_acc: 0.9912 - lr: 1.0000e-04
Epoch 191/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0330 - acc: 0.9911 - val_loss: 0.0463 - val_acc: 0.9800 - lr: 1.0000e-04
Epoch 192/2000
720/720 [==============================] - 2s 2ms/step - loss: 0.0326 - acc: 0.9908 - val_loss: 0.0391 - val_acc: 0.9825 - lr: 1.0000e-04

Evaluation¶

In [ ]:
fig, ax = plt.subplots(ncols=2, figsize=(16, 6))

ax[0].plot(history.history['loss'], label='Training loss')
ax[0].plot(history.history['val_loss'], label='Validation loss')

ax[0].legend()
ax[0].set(
    ylabel='Loss',
    xlabel='Epoch'
)

ax[1].plot(history.history['acc'], label='Training accuracy')
ax[1].plot(history.history['val_acc'], label='Validation accuracy')

ax[1].set(
    ylabel='Accuracy',
    xlabel='Epoch'
)

plt.legend()
plt.tight_layout()
plt.show()
In [ ]:
predictions = [np.argmax(p) for p in mlp.predict(x_test)]
63/63 [==============================] - 0s 1ms/step
In [ ]:
print(classification_report(y_test, predictions))
              precision    recall  f1-score   support

           0       0.99      0.98      0.98       884
           1       0.98      0.99      0.98       279
           2       1.00      0.98      0.99       279
           3       0.98      1.00      0.99       279
           4       0.99      1.00      0.99       279

    accuracy                           0.99      2000
   macro avg       0.99      0.99      0.99      2000
weighted avg       0.99      0.99      0.99      2000

In [ ]:
fig, ax = plt.subplots(figsize=(14, 8))

ConfusionMatrixDisplay(confusion_matrix(predictions, y_test)).plot(values_format='.0f', ax=ax)

ax.set(
    title='Confusion Matrix',
    xlabel='True Labels',
    ylabel='Predicted Labels'
)

plt.tight_layout()
plt.show()
In [ ]:
fig, ax = plt.subplots(figsize=(12, 12))

ax.scatter(x_test[:,0], x_test[:,1], c=y_test, marker='d')

ax.scatter(x_test[:,0], x_test[:,1], c=predictions, marker='x')

ax.set(
    title='Predictions',
    xlabel='$x$',
    ylabel='$y$'
)


plt.show()

Questão 3¶

Considere uma rede deep learning convolutiva (treinada) aplicada à classificação de padrões em imagens. A base de dados considerada é a CIFAR-10 (pesquise). A referida base de dados consiste de 60 mil imagens coloridas de 32x32 pixels, com 50 mil para treino e 10 mil para teste. As imagens estão divididas em 10 classes, a saber: avião, navio, cami- nhão, automóvel, sapo, pássaro, cachorro, gato, cavalo e cervo. Cada imagem possui apenas um dos objetos da classe de interesse, podendo estar parcialmente obstruído por outros ob- jetos que não pertençam a esse conjunto. Apresente os resultados da classificação em uma matriz de confusão.

Este trabalho foi baseado no trabalho do CODERONIN1 do Kaggle Link: https://www.kaggle.com/code/adi160/cifar-10-keras-transfer-learning

In [ ]:
from sklearn.utils.multiclass import unique_labels
import os
import matplotlib.image as mpimg
import seaborn as sns
from keras.layers import Flatten,Dense,BatchNormalization,Activation,Dropout
from keras.utils import to_categorical
from tensorflow.keras.callbacks import EarlyStopping

# library for transfer learning
from keras.applications import VGG19,ResNet50

# data augumentation
from keras.preprocessing.image import ImageDataGenerator

# import dataset
from keras.datasets import cifar10

Load dataset¶

In [ ]:
# load dataset and divide in train and teste datasets
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
In [ ]:
print(f"shape x train: {x_train.shape}")
print(f"shape y train: {y_train.shape}")
print(f"shape x test: {x_test.shape}")
print(f"shape y test: {y_test.shape}")
shape x train: (50000, 32, 32, 3)
shape y train: (50000, 1)
shape x test: (10000, 32, 32, 3)
shape y test: (10000, 1)
In [ ]:
y_train=to_categorical(y_train)
y_test=to_categorical(y_test)
In [ ]:
print(f"shape x train: {x_train.shape}")
print(f"shape y train: {y_train.shape}")
print(f"shape x test: {x_test.shape}")
print(f"shape y test: {y_test.shape}")
shape x train: (50000, 32, 32, 3)
shape y train: (50000, 10)
shape x test: (10000, 32, 32, 3)
shape y test: (10000, 10)

Data Augmentation¶

In [ ]:
# Let's instantiate the object
train_generator = ImageDataGenerator(
                                    rotation_range=2, 
                                    horizontal_flip=True,
                                    zoom_range=.1 )

test_generator = ImageDataGenerator(
                                    rotation_range=2, 
                                    horizontal_flip= True,
                                    zoom_range=.1)
In [ ]:
#Fit the augmentation method to the data
train_generator.fit(x_train)
test_generator.fit(x_test)

Transfer Learning - VGG19¶

In [ ]:
lrr= ReduceLROnPlateau(
                       monitor='val_accuracy', #Metric to be measured
                       factor=.01, #Factor by which learning rate will be reduced
                       patience=3,  #No. of epochs after which if there is no improvement in the val_acc, the learning rate is reduced
                       min_lr=1e-5) #The minimum learning rate 

Importing¶

In [ ]:
# The first base model used is VGG19. 
# The pretrained weights from the imagenet challenge are used
base_vgg19 = VGG19(include_top=False,
                   input_shape=(32,32,3),
                   classes=y_train.shape[1],
                   weights='imagenet')

base_vgg19.summary()
Model: "vgg19"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 input_4 (InputLayer)        [(None, 32, 32, 3)]       0         
                                                                 
 block1_conv1 (Conv2D)       (None, 32, 32, 64)        1792      
                                                                 
 block1_conv2 (Conv2D)       (None, 32, 32, 64)        36928     
                                                                 
 block1_pool (MaxPooling2D)  (None, 16, 16, 64)        0         
                                                                 
 block2_conv1 (Conv2D)       (None, 16, 16, 128)       73856     
                                                                 
 block2_conv2 (Conv2D)       (None, 16, 16, 128)       147584    
                                                                 
 block2_pool (MaxPooling2D)  (None, 8, 8, 128)         0         
                                                                 
 block3_conv1 (Conv2D)       (None, 8, 8, 256)         295168    
                                                                 
 block3_conv2 (Conv2D)       (None, 8, 8, 256)         590080    
                                                                 
 block3_conv3 (Conv2D)       (None, 8, 8, 256)         590080    
                                                                 
 block3_conv4 (Conv2D)       (None, 8, 8, 256)         590080    
                                                                 
 block3_pool (MaxPooling2D)  (None, 4, 4, 256)         0         
                                                                 
 block4_conv1 (Conv2D)       (None, 4, 4, 512)         1180160   
                                                                 
 block4_conv2 (Conv2D)       (None, 4, 4, 512)         2359808   
                                                                 
 block4_conv3 (Conv2D)       (None, 4, 4, 512)         2359808   
                                                                 
 block4_conv4 (Conv2D)       (None, 4, 4, 512)         2359808   
                                                                 
 block4_pool (MaxPooling2D)  (None, 2, 2, 512)         0         
                                                                 
 block5_conv1 (Conv2D)       (None, 2, 2, 512)         2359808   
                                                                 
 block5_conv2 (Conv2D)       (None, 2, 2, 512)         2359808   
                                                                 
 block5_conv3 (Conv2D)       (None, 2, 2, 512)         2359808   
                                                                 
 block5_conv4 (Conv2D)       (None, 2, 2, 512)         2359808   
                                                                 
 block5_pool (MaxPooling2D)  (None, 1, 1, 512)         0         
                                                                 
=================================================================
Total params: 20,024,384
Trainable params: 20,024,384
Non-trainable params: 0
_________________________________________________________________
In [ ]:
vgg19 = Sequential()
vgg19.add(base_vgg19) 
vgg19.add(Flatten()) 

#Add the Dense layers along with activation and batch normalization
vgg19.add(Dense(1024,activation=('relu'),input_dim=512))
vgg19.add(Dense(512,activation=('relu'))) 
vgg19.add(Dense(256,activation=('relu'))) 
#model_1.add(Dropout(.3)) 
vgg19.add(Dense(128,activation=('relu')))
#model_1.add(Dropout(.2))
vgg19.add(Dense(10,activation=('softmax'))) 

vgg19.summary()
Model: "sequential_3"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 vgg19 (Functional)          (None, 1, 1, 512)         20024384  
                                                                 
 flatten_3 (Flatten)         (None, 512)               0         
                                                                 
 dense_15 (Dense)            (None, 1024)              525312    
                                                                 
 dense_16 (Dense)            (None, 512)               524800    
                                                                 
 dense_17 (Dense)            (None, 256)               131328    
                                                                 
 dense_18 (Dense)            (None, 128)               32896     
                                                                 
 dense_19 (Dense)            (None, 10)                1290      
                                                                 
=================================================================
Total params: 21,240,010
Trainable params: 21,240,010
Non-trainable params: 0
_________________________________________________________________

Training¶

In [ ]:
# hyperparameters
batch_size= 100
epochs=50

learn_rate=.001

sgd=SGD(lr=learn_rate,momentum=.9,nesterov=False)
adam=Adam(lr=learn_rate, beta_1=0.9, beta_2=0.999, 
          epsilon=None, decay=0.0, amsgrad=False)

vgg19.compile(optimizer=sgd,
                    loss='categorical_crossentropy',
                    metrics=['accuracy'])
In [ ]:
vgg19.fit_generator(train_generator.flow(x_train,y_train,batch_size=batch_size),
                      epochs=epochs,
                      steps_per_epoch=x_train.shape[0]//batch_size,
                      validation_data=test_generator.flow(x_test,y_test,batch_size=batch_size),validation_steps=250,
                      callbacks=[lrr],verbose=1)
Epoch 1/50
/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:5: UserWarning: `Model.fit_generator` is deprecated and will be removed in a future version. Please use `Model.fit`, which supports generators.
  """
500/500 [==============================] - ETA: 0s - loss: 1.6137 - accuracy: 0.4008
WARNING:tensorflow:Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches (in this case, 250 batches). You may need to use the repeat() function when building your dataset.
500/500 [==============================] - 37s 73ms/step - loss: 1.6137 - accuracy: 0.4008 - val_loss: 1.0790 - val_accuracy: 0.6240 - lr: 0.0010
Epoch 2/50
500/500 [==============================] - ETA: 0s - loss: 0.8359 - accuracy: 0.7123
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 32s 64ms/step - loss: 0.8359 - accuracy: 0.7123 - lr: 0.0010
Epoch 3/50
500/500 [==============================] - ETA: 0s - loss: 0.6465 - accuracy: 0.7799
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 33s 65ms/step - loss: 0.6465 - accuracy: 0.7799 - lr: 0.0010
Epoch 4/50
500/500 [==============================] - ETA: 0s - loss: 0.5492 - accuracy: 0.8131
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 32s 64ms/step - loss: 0.5492 - accuracy: 0.8131 - lr: 0.0010
Epoch 5/50
500/500 [==============================] - ETA: 0s - loss: 0.4832 - accuracy: 0.8365
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 32s 64ms/step - loss: 0.4832 - accuracy: 0.8365 - lr: 0.0010
Epoch 6/50
500/500 [==============================] - ETA: 0s - loss: 0.4242 - accuracy: 0.8548
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 32s 64ms/step - loss: 0.4242 - accuracy: 0.8548 - lr: 0.0010
Epoch 7/50
500/500 [==============================] - ETA: 0s - loss: 0.3861 - accuracy: 0.8667
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 32s 64ms/step - loss: 0.3861 - accuracy: 0.8667 - lr: 0.0010
Epoch 8/50
500/500 [==============================] - ETA: 0s - loss: 0.3509 - accuracy: 0.8775
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 35s 70ms/step - loss: 0.3509 - accuracy: 0.8775 - lr: 0.0010
Epoch 9/50
500/500 [==============================] - ETA: 0s - loss: 0.3144 - accuracy: 0.8912
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 32s 64ms/step - loss: 0.3144 - accuracy: 0.8912 - lr: 0.0010
Epoch 10/50
500/500 [==============================] - ETA: 0s - loss: 0.2832 - accuracy: 0.9022
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 32s 64ms/step - loss: 0.2832 - accuracy: 0.9022 - lr: 0.0010
Epoch 11/50
500/500 [==============================] - ETA: 0s - loss: 0.2564 - accuracy: 0.9116
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 32s 65ms/step - loss: 0.2564 - accuracy: 0.9116 - lr: 0.0010
Epoch 12/50
500/500 [==============================] - ETA: 0s - loss: 0.2257 - accuracy: 0.9214
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 32s 65ms/step - loss: 0.2257 - accuracy: 0.9214 - lr: 0.0010
Epoch 13/50
500/500 [==============================] - ETA: 0s - loss: 0.2134 - accuracy: 0.9262
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 32s 64ms/step - loss: 0.2134 - accuracy: 0.9262 - lr: 0.0010
Epoch 14/50
500/500 [==============================] - ETA: 0s - loss: 0.1928 - accuracy: 0.9343
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 32s 65ms/step - loss: 0.1928 - accuracy: 0.9343 - lr: 0.0010
Epoch 15/50
500/500 [==============================] - ETA: 0s - loss: 0.1731 - accuracy: 0.9400
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 32s 64ms/step - loss: 0.1731 - accuracy: 0.9400 - lr: 0.0010
Epoch 16/50
500/500 [==============================] - ETA: 0s - loss: 0.1550 - accuracy: 0.9474
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 32s 64ms/step - loss: 0.1550 - accuracy: 0.9474 - lr: 0.0010
Epoch 17/50
500/500 [==============================] - ETA: 0s - loss: 0.1412 - accuracy: 0.9514
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 32s 65ms/step - loss: 0.1412 - accuracy: 0.9514 - lr: 0.0010
Epoch 18/50
500/500 [==============================] - ETA: 0s - loss: 0.1307 - accuracy: 0.9560
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 32s 64ms/step - loss: 0.1307 - accuracy: 0.9560 - lr: 0.0010
Epoch 19/50
500/500 [==============================] - ETA: 0s - loss: 0.1171 - accuracy: 0.9605
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 32s 65ms/step - loss: 0.1171 - accuracy: 0.9605 - lr: 0.0010
Epoch 20/50
500/500 [==============================] - ETA: 0s - loss: 0.1071 - accuracy: 0.9633
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 32s 64ms/step - loss: 0.1071 - accuracy: 0.9633 - lr: 0.0010
Epoch 21/50
500/500 [==============================] - ETA: 0s - loss: 0.1004 - accuracy: 0.9654
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 32s 65ms/step - loss: 0.1004 - accuracy: 0.9654 - lr: 0.0010
Epoch 22/50
500/500 [==============================] - ETA: 0s - loss: 0.0919 - accuracy: 0.9687
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 32s 65ms/step - loss: 0.0919 - accuracy: 0.9687 - lr: 0.0010
Epoch 23/50
500/500 [==============================] - ETA: 0s - loss: 0.0862 - accuracy: 0.9704
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 34s 68ms/step - loss: 0.0862 - accuracy: 0.9704 - lr: 0.0010
Epoch 24/50
500/500 [==============================] - ETA: 0s - loss: 0.0761 - accuracy: 0.9743
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 32s 65ms/step - loss: 0.0761 - accuracy: 0.9743 - lr: 0.0010
Epoch 25/50
500/500 [==============================] - ETA: 0s - loss: 0.0703 - accuracy: 0.9761
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 32s 64ms/step - loss: 0.0703 - accuracy: 0.9761 - lr: 0.0010
Epoch 26/50
500/500 [==============================] - ETA: 0s - loss: 0.0664 - accuracy: 0.9768
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 32s 64ms/step - loss: 0.0664 - accuracy: 0.9768 - lr: 0.0010
Epoch 27/50
500/500 [==============================] - ETA: 0s - loss: 0.0601 - accuracy: 0.9798
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 32s 65ms/step - loss: 0.0601 - accuracy: 0.9798 - lr: 0.0010
Epoch 28/50
500/500 [==============================] - ETA: 0s - loss: 0.0578 - accuracy: 0.9802
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 32s 64ms/step - loss: 0.0578 - accuracy: 0.9802 - lr: 0.0010
Epoch 29/50
500/500 [==============================] - ETA: 0s - loss: 0.0568 - accuracy: 0.9805
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 32s 64ms/step - loss: 0.0568 - accuracy: 0.9805 - lr: 0.0010
Epoch 30/50
500/500 [==============================] - ETA: 0s - loss: 0.0488 - accuracy: 0.9842
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 32s 65ms/step - loss: 0.0488 - accuracy: 0.9842 - lr: 0.0010
Epoch 31/50
500/500 [==============================] - ETA: 0s - loss: 0.0475 - accuracy: 0.9842
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 32s 64ms/step - loss: 0.0475 - accuracy: 0.9842 - lr: 0.0010
Epoch 32/50
500/500 [==============================] - ETA: 0s - loss: 0.0455 - accuracy: 0.9844
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 32s 64ms/step - loss: 0.0455 - accuracy: 0.9844 - lr: 0.0010
Epoch 33/50
500/500 [==============================] - ETA: 0s - loss: 0.0433 - accuracy: 0.9855
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 32s 65ms/step - loss: 0.0433 - accuracy: 0.9855 - lr: 0.0010
Epoch 34/50
500/500 [==============================] - ETA: 0s - loss: 0.0404 - accuracy: 0.9866
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 32s 65ms/step - loss: 0.0404 - accuracy: 0.9866 - lr: 0.0010
Epoch 35/50
500/500 [==============================] - ETA: 0s - loss: 0.0433 - accuracy: 0.9849
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 32s 64ms/step - loss: 0.0433 - accuracy: 0.9849 - lr: 0.0010
Epoch 36/50
500/500 [==============================] - ETA: 0s - loss: 0.0367 - accuracy: 0.9876
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 32s 65ms/step - loss: 0.0367 - accuracy: 0.9876 - lr: 0.0010
Epoch 37/50
500/500 [==============================] - ETA: 0s - loss: 0.0306 - accuracy: 0.9895
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 32s 64ms/step - loss: 0.0306 - accuracy: 0.9895 - lr: 0.0010
Epoch 38/50
500/500 [==============================] - ETA: 0s - loss: 0.0300 - accuracy: 0.9896
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 32s 65ms/step - loss: 0.0300 - accuracy: 0.9896 - lr: 0.0010
Epoch 39/50
500/500 [==============================] - ETA: 0s - loss: 0.0328 - accuracy: 0.9892
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 32s 65ms/step - loss: 0.0328 - accuracy: 0.9892 - lr: 0.0010
Epoch 40/50
500/500 [==============================] - ETA: 0s - loss: 0.0263 - accuracy: 0.9912
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 32s 65ms/step - loss: 0.0263 - accuracy: 0.9912 - lr: 0.0010
Epoch 41/50
500/500 [==============================] - ETA: 0s - loss: 0.0276 - accuracy: 0.9909
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 32s 64ms/step - loss: 0.0276 - accuracy: 0.9909 - lr: 0.0010
Epoch 42/50
500/500 [==============================] - ETA: 0s - loss: 0.0274 - accuracy: 0.9908
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 33s 65ms/step - loss: 0.0274 - accuracy: 0.9908 - lr: 0.0010
Epoch 43/50
500/500 [==============================] - ETA: 0s - loss: 0.0267 - accuracy: 0.9907
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 32s 65ms/step - loss: 0.0267 - accuracy: 0.9907 - lr: 0.0010
Epoch 44/50
500/500 [==============================] - ETA: 0s - loss: 0.0308 - accuracy: 0.9895
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 32s 65ms/step - loss: 0.0308 - accuracy: 0.9895 - lr: 0.0010
Epoch 45/50
500/500 [==============================] - ETA: 0s - loss: 0.0257 - accuracy: 0.9914
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 32s 64ms/step - loss: 0.0257 - accuracy: 0.9914 - lr: 0.0010
Epoch 46/50
500/500 [==============================] - ETA: 0s - loss: 0.0220 - accuracy: 0.9930
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 32s 64ms/step - loss: 0.0220 - accuracy: 0.9930 - lr: 0.0010
Epoch 47/50
500/500 [==============================] - ETA: 0s - loss: 0.0216 - accuracy: 0.9928
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 32s 65ms/step - loss: 0.0216 - accuracy: 0.9928 - lr: 0.0010
Epoch 48/50
500/500 [==============================] - ETA: 0s - loss: 0.0211 - accuracy: 0.9928
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 32s 64ms/step - loss: 0.0211 - accuracy: 0.9928 - lr: 0.0010
Epoch 49/50
500/500 [==============================] - ETA: 0s - loss: 0.0216 - accuracy: 0.9926
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 33s 65ms/step - loss: 0.0216 - accuracy: 0.9926 - lr: 0.0010
Epoch 50/50
500/500 [==============================] - ETA: 0s - loss: 0.0230 - accuracy: 0.9920
WARNING:tensorflow:Learning rate reduction is conditioned on metric `val_accuracy` which is not available. Available metrics are: loss,accuracy,lr
500/500 [==============================] - 32s 64ms/step - loss: 0.0230 - accuracy: 0.9920 - lr: 0.0010
Out[ ]:
<keras.callbacks.History at 0x7f660232e310>
In [ ]:
fig, ax = plt.subplots(ncols=2, figsize=(16, 6))

ax[0].plot(vgg19.history.history['loss'], label='Training loss')
ax[0].plot(vgg19.history.history['val_loss'], label='Validation loss')

ax[0].legend()
ax[0].set(
    ylabel='Loss',
    xlabel='Epoch'
)

ax[1].plot(vgg19.history.history['accuracy'], label='Training accuracy')
ax[1].plot(vgg19.history.history['val_accuracy'], label='Validation accuracy')

ax[1].set(
    ylabel='Accuracy',
    xlabel='Epoch'
)

plt.legend()
plt.tight_layout()
plt.show()

Evaluation¶

In [ ]:
def plot_confusion_matrix(y_true, y_pred, classes,
                          normalize=False,
                          title=None,
                          cmap=plt.cm.Blues):
    """
    This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """
    if not title:
        if normalize:
            title = 'Normalized confusion matrix'
        else:
            title = 'Confusion matrix, without normalization'

    # Compute confusion matrix
    cm = confusion_matrix(y_true, y_pred)
    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
        print("Normalized confusion matrix")
    else:
        print('Confusion matrix, without normalization')

#     print(cm)

    fig, ax = plt.subplots(figsize=(7,7))
    im = ax.imshow(cm, interpolation='nearest', cmap=cmap)
    ax.figure.colorbar(im, ax=ax)
    # We want to show all ticks...
    ax.set(xticks=np.arange(cm.shape[1]),
           yticks=np.arange(cm.shape[0]),
           # ... and label them with the respective list entries
           xticklabels=classes, yticklabels=classes,
           title=title,
           ylabel='True label',
           xlabel='Predicted label')

    # Rotate the tick labels and set their alignment.
    plt.setp(ax.get_xticklabels(), rotation=45, ha="right",
             rotation_mode="anchor")
    # Loop over data dimensions and create text annotations.
    fmt = '.2f' if normalize else 'd'
    thresh = cm.max() / 2.
    for i in range(cm.shape[0]):
        for j in range(cm.shape[1]):
            ax.text(j, i, format(cm[i, j], fmt),
                    ha="center", va="center",
                    color="white" if cm[i, j] > thresh else "black")
    fig.tight_layout()
    return ax


np.set_printoptions(precision=2)
In [ ]:
y_pred = vgg19.predict(x_test)
y_pred = np.argmax(y_pred,axis=1)
y_true=np.argmax(y_test,axis=1)

#Compute the confusion matrix
confusion_mtx = confusion_matrix(y_true,y_pred)
313/313 [==============================] - 4s 10ms/step
In [ ]:
class_names=['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
In [ ]:
# Plot non-normalized confusion matrix
plot_confusion_matrix(y_true, y_pred, classes=class_names,
                      title='Confusion matrix, without normalization')
Confusion matrix, without normalization
Out[ ]:
<matplotlib.axes._subplots.AxesSubplot at 0x7f64c4167a10>
In [ ]:
# Plot normalized confusion matrix
plot_confusion_matrix(y_true, y_pred, classes=class_names, normalize=True,
                      title='Normalized confusion matrix')
# plt.show()
Normalized confusion matrix
Out[ ]:
<matplotlib.axes._subplots.AxesSubplot at 0x7f64ba2477d0>
In [ ]:
print(classification_report(y_true, y_pred, target_names=class_names))
              precision    recall  f1-score   support

    airplane       0.89      0.89      0.89      1000
  automobile       0.91      0.95      0.93      1000
        bird       0.90      0.80      0.85      1000
         cat       0.77      0.70      0.74      1000
        deer       0.86      0.87      0.86      1000
         dog       0.78      0.82      0.80      1000
        frog       0.89      0.90      0.90      1000
       horse       0.89      0.90      0.89      1000
        ship       0.93      0.93      0.93      1000
       truck       0.87      0.94      0.90      1000

    accuracy                           0.87     10000
   macro avg       0.87      0.87      0.87     10000
weighted avg       0.87      0.87      0.87     10000

Questão 4¶

Utilize a rede neural perceptron de múltiplas camadas do tipo NARX (rede recorrente) para fazer a predição de um passo $x^{(n+1)}$ da série temporal $x(n) = 1 + cos(n + cos^2(n))$, n=0,1,2,3,.... Gere inicialmente um conjunto de amostras para o treinamento, definindo o erro de predição como $e^{(n+1)}=x(n+1)-x^{(n+1)}$. Avalie o desempenho mostrando a curva a série temporal, a curva de predição o e curva do erro de predição.

In [2]:
from tensorflow.keras.layers import Dense, Reshape, Flatten, Dropout, Activation, BatchNormalization, LSTM, Embedding, Input
from tensorflow.keras.preprocessing.sequence import TimeseriesGenerator
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau
In [3]:
# defining function
def f(x):
  return 1 + np.cos(x + (np.cos(x))**2)
In [4]:
x = np.linspace(0, 100, 10000)
y = [f(i) for i in x]
In [5]:
# plot data
points = 10000
plt.plot(x[:points], y[:points])
plt.show()
In [6]:
# splitting data into training and testing
test_size = 2000
x_train = x[:-test_size]
y_train = y[:-test_size]
x_test = x[-test_size:]
y_test = y[-test_size:]
In [9]:
# plot data
fig, axes = plt.subplots(ncols=1, figsize=(24, 5))
axes.plot(x_train, y_train, label='train')
axes.plot(x_test, y_test, label='test')
axes.set_title('Data')
axes.legend()
plt.show()
In [11]:
sequence_x = list(TimeseriesGenerator(x_train, x_train, 4, batch_size=1))
sequence_y = list(TimeseriesGenerator(x_train[4:], x_train[4:], 3, batch_size=1))

train_seqs = []
y_train = []
for (x_seq, next_x), (next_seq, _) in zip(sequence_x, sequence_y):
    seq = np.append(x_seq.reshape(4,), next_seq.reshape(3, ))
    
    train_seqs.append(seq)
    y_train.append(next_x)

train_seqs = np.array(train_seqs)
y_train = np.array(y_train)
In [12]:
sequence_x = list(TimeseriesGenerator(x_test, x_test, 4, batch_size=1))
sequence_y = list(TimeseriesGenerator(x_test[4:], x_test[4:], 3, batch_size=1))

test_seqs = []
y_test = []
for (x_seq, next_x), (next_seq, _) in zip(sequence_x, sequence_y):
    seq = np.append(x_seq.reshape(4,), next_seq.reshape(3, ))
    
    test_seqs.append(seq)
    y_test.append(next_x)

test_seqs = np.array(test_seqs)
y_test = np.array(y_test)
In [15]:
# building the model
model = Sequential([
    LSTM(128, input_shape=(7, 1), return_sequences=True),
    LSTM(64, return_sequences=True),
    LSTM(32),
    Dense(1)
])

model.compile(loss="mean_absolute_error", optimizer="rmsprop")

model.summary()
Model: "sequential_3"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 lstm_9 (LSTM)               (None, 7, 128)            66560     
                                                                 
 lstm_10 (LSTM)              (None, 7, 64)             49408     
                                                                 
 lstm_11 (LSTM)              (None, 32)                12416     
                                                                 
 dense_3 (Dense)             (None, 1)                 33        
                                                                 
=================================================================
Total params: 128,417
Trainable params: 128,417
Non-trainable params: 0
_________________________________________________________________
In [16]:
# Training the model
history = model.fit(
    train_seqs, y_train,
    validation_split=0.1,
    batch_size=8,
    epochs=350,
    shuffle=True,
    callbacks=[
        EarlyStopping(monitor='val_loss', patience=5),
        ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=5, min_lr=0.0001)
    ]
)
Epoch 1/350
900/900 [==============================] - 15s 13ms/step - loss: 19.2268 - val_loss: 42.0530 - lr: 0.0010
Epoch 2/350
900/900 [==============================] - 10s 11ms/step - loss: 5.9539 - val_loss: 21.8673 - lr: 0.0010
Epoch 3/350
900/900 [==============================] - 10s 11ms/step - loss: 1.8953 - val_loss: 11.9836 - lr: 0.0010
Epoch 4/350
900/900 [==============================] - 12s 13ms/step - loss: 1.0453 - val_loss: 7.9481 - lr: 0.0010
Epoch 5/350
900/900 [==============================] - 11s 12ms/step - loss: 0.8444 - val_loss: 6.3535 - lr: 0.0010
Epoch 6/350
900/900 [==============================] - 10s 11ms/step - loss: 0.7501 - val_loss: 5.6213 - lr: 0.0010
Epoch 7/350
900/900 [==============================] - 9s 10ms/step - loss: 0.6863 - val_loss: 5.3713 - lr: 0.0010
Epoch 8/350
900/900 [==============================] - 12s 13ms/step - loss: 0.6383 - val_loss: 5.0735 - lr: 0.0010
Epoch 9/350
900/900 [==============================] - 11s 12ms/step - loss: 0.6063 - val_loss: 4.9584 - lr: 0.0010
Epoch 10/350
900/900 [==============================] - 9s 10ms/step - loss: 0.5673 - val_loss: 5.5064 - lr: 0.0010
Epoch 11/350
900/900 [==============================] - 9s 10ms/step - loss: 0.5476 - val_loss: 4.8842 - lr: 0.0010
Epoch 12/350
900/900 [==============================] - 9s 10ms/step - loss: 0.5189 - val_loss: 4.8164 - lr: 0.0010
Epoch 13/350
900/900 [==============================] - 10s 11ms/step - loss: 0.5060 - val_loss: 4.5964 - lr: 0.0010
Epoch 14/350
900/900 [==============================] - 9s 10ms/step - loss: 0.4958 - val_loss: 4.4385 - lr: 0.0010
Epoch 15/350
900/900 [==============================] - 10s 11ms/step - loss: 0.4765 - val_loss: 4.3941 - lr: 0.0010
Epoch 16/350
900/900 [==============================] - 9s 10ms/step - loss: 0.4595 - val_loss: 4.3836 - lr: 0.0010
Epoch 17/350
900/900 [==============================] - 9s 10ms/step - loss: 0.4499 - val_loss: 4.6375 - lr: 0.0010
Epoch 18/350
900/900 [==============================] - 9s 10ms/step - loss: 0.4343 - val_loss: 4.1039 - lr: 0.0010
Epoch 19/350
900/900 [==============================] - 10s 11ms/step - loss: 0.4234 - val_loss: 4.4430 - lr: 0.0010
Epoch 20/350
900/900 [==============================] - 14s 15ms/step - loss: 0.4182 - val_loss: 4.1976 - lr: 0.0010
Epoch 21/350
900/900 [==============================] - 12s 13ms/step - loss: 0.4104 - val_loss: 4.3439 - lr: 0.0010
Epoch 22/350
900/900 [==============================] - 12s 14ms/step - loss: 0.3992 - val_loss: 4.3010 - lr: 0.0010
Epoch 23/350
900/900 [==============================] - 12s 14ms/step - loss: 0.3901 - val_loss: 4.5941 - lr: 0.0010
In [17]:
# plotting loss
fig, ax = plt.subplots(figsize=(8, 6))

ax.plot(history.history['loss'], label='Training loss')
ax.plot(history.history['val_loss'], label='Validation loss')

ax.legend()
ax.set(
    ylabel='Loss',
    xlabel='Epoch'
)

plt.legend()
plt.tight_layout()
plt.show()
In [18]:
# predicting
y_pred = model.predict(test_seqs)
63/63 [==============================] - 1s 5ms/step
In [19]:
erro = y_test - y_pred
In [20]:
fig, ax = plt.subplots(figsize=(12, 6))

ax.bar(
    x=range(len(erro)),
    height=erro.flatten()
)

ax.set(
    title='Erro da Predição',
    ylabel='$Erro$',
    xlabel='$Interações$'
)
plt.show()

Questão 5¶

Considere quatro distribuições gaussianas, $C_1, C_2, C_3, C_4$, em um espaço de entrada de dimensionalidade igual a oito, isto é x = ($x_1, x_2, ..., x_8)^t$ .Todas as nuvens de dados formadas têm variâncias unitária, mas centros ou vetores média são diferentes e dados por $m_1 = (0,0,0,0,0,0,0,0)^t , m_2 = (4,0,0,0,0,0,0,0)^t, m_3 = (0,0,0,4,0,0,0,0)^t, m_4 = (0,0,0,0,0,0,0,4)^t $.

Utilize uma rede de autoeconders para reduzir a dimensionalidade dos dados para duas di- mensões. O objetivo é visualizar os dados de dimensão 8 em um espaço de dimensão 2. Esboce os dados neste novo espaço.

Observação: Gere inicialmente os dados em dimensão oito para cada uma das distribuições gaussianas. Selecione o conjunto de treinamento. Defina uma rede de autoencoder com uma arquitetura por exemplo do tipo 8:2:8 ou outro equivalente com mais camadas mas que reduza para 2 dimensões. Após o treinamento faça a redução de dimensionalidade com a rede de arquitetura 8:2 por exemplo.

In [21]:
from tensorflow.keras import Model
In [22]:
# generate four gaussian distributions with 8 dimensions, mean m1 = (0,0,0,0,0,0,0,0), m2 = (4,0,0,0,0,0,0,0), m3 = (0,0,0,4,0,0,0,0), m4 = (0,0,0,0,0,0,0,4) and covariance matrix I
m1 = np.zeros(8)
m2 = np.array([4,0,0,0,0,0,0,0])
m3 = np.array([0,0,0,4,0,0,0,0])
m4 = np.array([0,0,0,0,0,0,0,4])
I = np.eye(8)
x1 = np.random.multivariate_normal(m1, I, 15000)
x2 = np.random.multivariate_normal(m2, I, 15000)
x3 = np.random.multivariate_normal(m3, I, 15000)
x4 = np.random.multivariate_normal(m4, I, 15000)
In [23]:
# plot the data
fig, axes = plt.subplots(ncols=2, figsize=(25, 6))
axes[0].scatter(x1[:,0], x1[:,1], label='Class 1')
axes[0].scatter(x2[:,0], x2[:,1], label='Class 2')
axes[0].scatter(x3[:,4], x3[:,3], label='Class 3')
axes[0].scatter(x4[:,-1], x4[:,-2], label='Class 4')
axes[0].legend()
axes[0].set(
    ylabel='$x_2$',
    xlabel='$x_1$'
)
plt.show()
In [35]:
# create an autoencoder netork to reduce the dimensionality of the data to 2 and plot the data
input_dim = Input(shape=(8,))
encoded = Dense(8, activation='leaky_relu')(input_dim)
encoded = Dense(4, activation='leaky_relu')(encoded)
encoded = Dense(2, activation='leaky_relu')(encoded)
decoded = Dense(4, activation='leaky_relu')(encoded)
decoded = Dense(8, activation='leaky_relu')(decoded)

autoencoder = Model(input_dim, decoded)
encoder = Model(input_dim, encoded)
In [36]:
opt = tf.keras.optimizers.Adam(learning_rate=0.0001)
autoencoder.compile(optimizer=opt, loss='mean_squared_error')

# fitting the model
history1 = autoencoder.fit(x1, x1, epochs=50, shuffle=True, validation_split=0.1)
history2 = autoencoder.fit(x2, x2, epochs=50, shuffle=True, validation_split=0.1)
history3 = autoencoder.fit(x3, x3, epochs=50, shuffle=True, validation_split=0.1)
history4 = autoencoder.fit(x4, x4, epochs=50, shuffle=True, validation_split=0.1)
Epoch 1/50
422/422 [==============================] - 1s 2ms/step - loss: 0.9627 - val_loss: 0.9539
Epoch 2/50
422/422 [==============================] - 1s 1ms/step - loss: 0.9439 - val_loss: 0.9409
Epoch 3/50
422/422 [==============================] - 1s 1ms/step - loss: 0.9354 - val_loss: 0.9344
Epoch 4/50
422/422 [==============================] - 1s 1ms/step - loss: 0.9297 - val_loss: 0.9282
Epoch 5/50
422/422 [==============================] - 1s 1ms/step - loss: 0.9237 - val_loss: 0.9208
Epoch 6/50
422/422 [==============================] - 1s 1ms/step - loss: 0.9162 - val_loss: 0.9117
Epoch 7/50
422/422 [==============================] - 1s 1ms/step - loss: 0.9075 - val_loss: 0.9017
Epoch 8/50
422/422 [==============================] - 1s 1ms/step - loss: 0.8981 - val_loss: 0.8916
Epoch 9/50
422/422 [==============================] - 1s 1ms/step - loss: 0.8890 - val_loss: 0.8830
Epoch 10/50
422/422 [==============================] - 1s 1ms/step - loss: 0.8812 - val_loss: 0.8763
Epoch 11/50
422/422 [==============================] - 1s 1ms/step - loss: 0.8756 - val_loss: 0.8717
Epoch 12/50
422/422 [==============================] - 1s 1ms/step - loss: 0.8718 - val_loss: 0.8684
Epoch 13/50
422/422 [==============================] - 1s 1ms/step - loss: 0.8688 - val_loss: 0.8657
Epoch 14/50
422/422 [==============================] - 1s 1ms/step - loss: 0.8663 - val_loss: 0.8633
Epoch 15/50
422/422 [==============================] - 1s 1ms/step - loss: 0.8640 - val_loss: 0.8611
Epoch 16/50
422/422 [==============================] - 1s 1ms/step - loss: 0.8617 - val_loss: 0.8589
Epoch 17/50
422/422 [==============================] - 0s 1ms/step - loss: 0.8595 - val_loss: 0.8566
Epoch 18/50
422/422 [==============================] - 1s 1ms/step - loss: 0.8572 - val_loss: 0.8542
Epoch 19/50
422/422 [==============================] - 1s 1ms/step - loss: 0.8549 - val_loss: 0.8519
Epoch 20/50
422/422 [==============================] - 1s 1ms/step - loss: 0.8525 - val_loss: 0.8495
Epoch 21/50
422/422 [==============================] - 1s 1ms/step - loss: 0.8501 - val_loss: 0.8470
Epoch 22/50
422/422 [==============================] - 1s 1ms/step - loss: 0.8477 - val_loss: 0.8446
Epoch 23/50
422/422 [==============================] - 1s 1ms/step - loss: 0.8452 - val_loss: 0.8419
Epoch 24/50
422/422 [==============================] - 1s 1ms/step - loss: 0.8426 - val_loss: 0.8394
Epoch 25/50
422/422 [==============================] - 1s 1ms/step - loss: 0.8399 - val_loss: 0.8368
Epoch 26/50
422/422 [==============================] - 1s 1ms/step - loss: 0.8373 - val_loss: 0.8342
Epoch 27/50
422/422 [==============================] - 1s 1ms/step - loss: 0.8346 - val_loss: 0.8316
Epoch 28/50
422/422 [==============================] - 1s 1ms/step - loss: 0.8319 - val_loss: 0.8289
Epoch 29/50
422/422 [==============================] - 1s 1ms/step - loss: 0.8292 - val_loss: 0.8262
Epoch 30/50
422/422 [==============================] - 1s 2ms/step - loss: 0.8266 - val_loss: 0.8235
Epoch 31/50
422/422 [==============================] - 1s 3ms/step - loss: 0.8239 - val_loss: 0.8208
Epoch 32/50
422/422 [==============================] - 1s 1ms/step - loss: 0.8213 - val_loss: 0.8181
Epoch 33/50
422/422 [==============================] - 1s 1ms/step - loss: 0.8187 - val_loss: 0.8154
Epoch 34/50
422/422 [==============================] - 1s 1ms/step - loss: 0.8159 - val_loss: 0.8127
Epoch 35/50
422/422 [==============================] - 1s 1ms/step - loss: 0.8132 - val_loss: 0.8101
Epoch 36/50
422/422 [==============================] - 1s 1ms/step - loss: 0.8107 - val_loss: 0.8077
Epoch 37/50
422/422 [==============================] - 1s 1ms/step - loss: 0.8082 - val_loss: 0.8051
Epoch 38/50
422/422 [==============================] - 1s 1ms/step - loss: 0.8057 - val_loss: 0.8027
Epoch 39/50
422/422 [==============================] - 1s 1ms/step - loss: 0.8032 - val_loss: 0.8001
Epoch 40/50
422/422 [==============================] - 1s 1ms/step - loss: 0.8008 - val_loss: 0.7975
Epoch 41/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7985 - val_loss: 0.7951
Epoch 42/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7963 - val_loss: 0.7928
Epoch 43/50
422/422 [==============================] - 0s 1ms/step - loss: 0.7941 - val_loss: 0.7906
Epoch 44/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7920 - val_loss: 0.7884
Epoch 45/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7900 - val_loss: 0.7863
Epoch 46/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7879 - val_loss: 0.7844
Epoch 47/50
422/422 [==============================] - 0s 1ms/step - loss: 0.7859 - val_loss: 0.7825
Epoch 48/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7840 - val_loss: 0.7808
Epoch 49/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7823 - val_loss: 0.7794
Epoch 50/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7807 - val_loss: 0.7779
Epoch 1/50
422/422 [==============================] - 1s 1ms/step - loss: 1.5929 - val_loss: 0.9509
Epoch 2/50
422/422 [==============================] - 1s 1ms/step - loss: 0.8839 - val_loss: 0.8212
Epoch 3/50
422/422 [==============================] - 1s 1ms/step - loss: 0.8232 - val_loss: 0.7883
Epoch 4/50
422/422 [==============================] - 1s 1ms/step - loss: 0.8011 - val_loss: 0.7718
Epoch 5/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7901 - val_loss: 0.7633
Epoch 6/50
422/422 [==============================] - 0s 1ms/step - loss: 0.7831 - val_loss: 0.7576
Epoch 7/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7776 - val_loss: 0.7530
Epoch 8/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7732 - val_loss: 0.7494
Epoch 9/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7695 - val_loss: 0.7460
Epoch 10/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7662 - val_loss: 0.7431
Epoch 11/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7634 - val_loss: 0.7406
Epoch 12/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7609 - val_loss: 0.7383
Epoch 13/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7587 - val_loss: 0.7362
Epoch 14/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7569 - val_loss: 0.7346
Epoch 15/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7553 - val_loss: 0.7330
Epoch 16/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7539 - val_loss: 0.7318
Epoch 17/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7527 - val_loss: 0.7308
Epoch 18/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7517 - val_loss: 0.7296
Epoch 19/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7508 - val_loss: 0.7290
Epoch 20/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7501 - val_loss: 0.7283
Epoch 21/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7495 - val_loss: 0.7279
Epoch 22/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7490 - val_loss: 0.7275
Epoch 23/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7486 - val_loss: 0.7271
Epoch 24/50
422/422 [==============================] - 0s 1ms/step - loss: 0.7482 - val_loss: 0.7266
Epoch 25/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7479 - val_loss: 0.7264
Epoch 26/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7476 - val_loss: 0.7259
Epoch 27/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7474 - val_loss: 0.7258
Epoch 28/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7472 - val_loss: 0.7259
Epoch 29/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7470 - val_loss: 0.7253
Epoch 30/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7468 - val_loss: 0.7253
Epoch 31/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7467 - val_loss: 0.7254
Epoch 32/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7465 - val_loss: 0.7250
Epoch 33/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7464 - val_loss: 0.7248
Epoch 34/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7463 - val_loss: 0.7249
Epoch 35/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7462 - val_loss: 0.7247
Epoch 36/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7461 - val_loss: 0.7248
Epoch 37/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7459 - val_loss: 0.7245
Epoch 38/50
422/422 [==============================] - 0s 1ms/step - loss: 0.7459 - val_loss: 0.7244
Epoch 39/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7458 - val_loss: 0.7244
Epoch 40/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7457 - val_loss: 0.7242
Epoch 41/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7456 - val_loss: 0.7241
Epoch 42/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7455 - val_loss: 0.7241
Epoch 43/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7454 - val_loss: 0.7239
Epoch 44/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7453 - val_loss: 0.7240
Epoch 45/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7452 - val_loss: 0.7239
Epoch 46/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7452 - val_loss: 0.7241
Epoch 47/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7451 - val_loss: 0.7236
Epoch 48/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7450 - val_loss: 0.7237
Epoch 49/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7449 - val_loss: 0.7234
Epoch 50/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7449 - val_loss: 0.7235
Epoch 1/50
422/422 [==============================] - 1s 1ms/step - loss: 1.5972 - val_loss: 1.4005
Epoch 2/50
422/422 [==============================] - 1s 1ms/step - loss: 1.3020 - val_loss: 1.1859
Epoch 3/50
422/422 [==============================] - 1s 1ms/step - loss: 1.1143 - val_loss: 1.0354
Epoch 4/50
422/422 [==============================] - 1s 1ms/step - loss: 0.9829 - val_loss: 0.9314
Epoch 5/50
422/422 [==============================] - 1s 1ms/step - loss: 0.8936 - val_loss: 0.8628
Epoch 6/50
422/422 [==============================] - 1s 1ms/step - loss: 0.8432 - val_loss: 0.8324
Epoch 7/50
422/422 [==============================] - 1s 1ms/step - loss: 0.8239 - val_loss: 0.8214
Epoch 8/50
422/422 [==============================] - 1s 1ms/step - loss: 0.8155 - val_loss: 0.8149
Epoch 9/50
422/422 [==============================] - 1s 1ms/step - loss: 0.8100 - val_loss: 0.8099
Epoch 10/50
422/422 [==============================] - 1s 1ms/step - loss: 0.8056 - val_loss: 0.8059
Epoch 11/50
422/422 [==============================] - 1s 1ms/step - loss: 0.8020 - val_loss: 0.8021
Epoch 12/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7989 - val_loss: 0.7989
Epoch 13/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7962 - val_loss: 0.7958
Epoch 14/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7937 - val_loss: 0.7933
Epoch 15/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7915 - val_loss: 0.7911
Epoch 16/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7895 - val_loss: 0.7890
Epoch 17/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7879 - val_loss: 0.7869
Epoch 18/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7863 - val_loss: 0.7852
Epoch 19/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7848 - val_loss: 0.7834
Epoch 20/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7834 - val_loss: 0.7821
Epoch 21/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7821 - val_loss: 0.7806
Epoch 22/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7809 - val_loss: 0.7787
Epoch 23/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7797 - val_loss: 0.7780
Epoch 24/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7787 - val_loss: 0.7763
Epoch 25/50
422/422 [==============================] - 0s 1ms/step - loss: 0.7777 - val_loss: 0.7746
Epoch 26/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7767 - val_loss: 0.7738
Epoch 27/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7759 - val_loss: 0.7720
Epoch 28/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7750 - val_loss: 0.7708
Epoch 29/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7743 - val_loss: 0.7695
Epoch 30/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7735 - val_loss: 0.7687
Epoch 31/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7728 - val_loss: 0.7677
Epoch 32/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7721 - val_loss: 0.7665
Epoch 33/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7715 - val_loss: 0.7657
Epoch 34/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7709 - val_loss: 0.7650
Epoch 35/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7703 - val_loss: 0.7640
Epoch 36/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7697 - val_loss: 0.7637
Epoch 37/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7693 - val_loss: 0.7629
Epoch 38/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7688 - val_loss: 0.7622
Epoch 39/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7683 - val_loss: 0.7616
Epoch 40/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7679 - val_loss: 0.7611
Epoch 41/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7675 - val_loss: 0.7607
Epoch 42/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7670 - val_loss: 0.7599
Epoch 43/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7667 - val_loss: 0.7595
Epoch 44/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7662 - val_loss: 0.7592
Epoch 45/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7658 - val_loss: 0.7587
Epoch 46/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7655 - val_loss: 0.7581
Epoch 47/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7651 - val_loss: 0.7578
Epoch 48/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7647 - val_loss: 0.7573
Epoch 49/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7643 - val_loss: 0.7570
Epoch 50/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7640 - val_loss: 0.7563
Epoch 1/50
422/422 [==============================] - 1s 1ms/step - loss: 2.2073 - val_loss: 1.3175
Epoch 2/50
422/422 [==============================] - 1s 1ms/step - loss: 1.1305 - val_loss: 0.9906
Epoch 3/50
422/422 [==============================] - 1s 1ms/step - loss: 0.9453 - val_loss: 0.8817
Epoch 4/50
422/422 [==============================] - 1s 1ms/step - loss: 0.8663 - val_loss: 0.8280
Epoch 5/50
422/422 [==============================] - 1s 1ms/step - loss: 0.8271 - val_loss: 0.8015
Epoch 6/50
422/422 [==============================] - 1s 1ms/step - loss: 0.8062 - val_loss: 0.7860
Epoch 7/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7931 - val_loss: 0.7751
Epoch 8/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7842 - val_loss: 0.7684
Epoch 9/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7784 - val_loss: 0.7639
Epoch 10/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7746 - val_loss: 0.7613
Epoch 11/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7718 - val_loss: 0.7587
Epoch 12/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7696 - val_loss: 0.7571
Epoch 13/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7678 - val_loss: 0.7551
Epoch 14/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7663 - val_loss: 0.7540
Epoch 15/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7651 - val_loss: 0.7527
Epoch 16/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7639 - val_loss: 0.7518
Epoch 17/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7629 - val_loss: 0.7508
Epoch 18/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7620 - val_loss: 0.7500
Epoch 19/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7612 - val_loss: 0.7491
Epoch 20/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7605 - val_loss: 0.7484
Epoch 21/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7598 - val_loss: 0.7478
Epoch 22/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7592 - val_loss: 0.7472
Epoch 23/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7586 - val_loss: 0.7464
Epoch 24/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7580 - val_loss: 0.7458
Epoch 25/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7574 - val_loss: 0.7452
Epoch 26/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7569 - val_loss: 0.7449
Epoch 27/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7564 - val_loss: 0.7441
Epoch 28/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7559 - val_loss: 0.7439
Epoch 29/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7554 - val_loss: 0.7432
Epoch 30/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7550 - val_loss: 0.7428
Epoch 31/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7546 - val_loss: 0.7424
Epoch 32/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7541 - val_loss: 0.7420
Epoch 33/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7538 - val_loss: 0.7415
Epoch 34/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7534 - val_loss: 0.7411
Epoch 35/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7530 - val_loss: 0.7409
Epoch 36/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7527 - val_loss: 0.7404
Epoch 37/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7522 - val_loss: 0.7400
Epoch 38/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7519 - val_loss: 0.7395
Epoch 39/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7515 - val_loss: 0.7391
Epoch 40/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7512 - val_loss: 0.7388
Epoch 41/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7509 - val_loss: 0.7385
Epoch 42/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7506 - val_loss: 0.7380
Epoch 43/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7504 - val_loss: 0.7377
Epoch 44/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7500 - val_loss: 0.7380
Epoch 45/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7498 - val_loss: 0.7371
Epoch 46/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7495 - val_loss: 0.7368
Epoch 47/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7492 - val_loss: 0.7365
Epoch 48/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7490 - val_loss: 0.7362
Epoch 49/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7487 - val_loss: 0.7360
Epoch 50/50
422/422 [==============================] - 1s 1ms/step - loss: 0.7484 - val_loss: 0.7358
In [37]:
encoded_x1 = encoder.predict(x1)
encoded_x2 = encoder.predict(x2)
encoded_x3 = encoder.predict(x3)
encoded_x4 = encoder.predict(x4)
469/469 [==============================] - 0s 751us/step
469/469 [==============================] - 0s 781us/step
469/469 [==============================] - 0s 789us/step
469/469 [==============================] - 0s 749us/step
In [38]:
# plot history
fig, axes = plt.subplots(ncols=4, figsize=(25, 6))
axes[0].plot(history1.history['loss'], label='Training loss')
axes[0].plot(history1.history['val_loss'], label='Validation loss')
axes[1].plot(history2.history['loss'], label='Training loss')
axes[1].plot(history2.history['val_loss'], label='Validation loss')
axes[2].plot(history3.history['loss'], label='Training loss')
axes[2].plot(history3.history['val_loss'], label='Validation loss')
axes[3].plot(history4.history['loss'], label='Training loss')
axes[3].plot(history4.history['val_loss'], label='Validation loss')
axes[0].legend()
axes[0].set_title('Class 1')
axes[0].set(
    ylabel='Loss',
    xlabel='Epoch'
)
axes[1].set_title('Class 2')
axes[1].legend()
axes[1].set(
    ylabel='Loss',
    xlabel='Epoch'
)
axes[2].set_title('Class 3')
axes[2].legend()
axes[2].set(
    ylabel='Loss',
    xlabel='Epoch'
)
axes[3].set_title('Class 4')
axes[3].legend()
axes[3].set(
    ylabel='Loss',
    xlabel='Epoch'
)
plt.show()
In [44]:
# plot the data
fig, ax = plt.subplots(figsize=(10,10))

ax.scatter(encoded_x1[:, 0][0:2000], encoded_x1[:, 1][0:2000], label='Classe 1')
ax.scatter(encoded_x2[:, 0][0:2000], encoded_x2[:, 1][0:2000], label='Classe 2')
ax.scatter(encoded_x3[:, 0][0:2000], encoded_x3[:, 1][0:2000], label='Classe 3')
ax.scatter(encoded_x4[:, 0][0:2000], encoded_x4[:, 1][0:2000], label='Classe 4')

ax.legend()
plt.show()

Questão 6¶

Pesquise sobre redes neurais recorrentes LSTM. Apresente neste estudo aplicações das LSTM deep learning. Seguem abaixo sugestões de aplicações.

  1. Predição de series temporais (exemplo: predição de palavras no texto, ou predição de ações na bolsa de valores, etc.)
  2. Reconhecimento de voz
  3. Processamento de Linguagem Natural
  4. Outra aplicações de livre escolha

As redes neurais LSTM (Long Short-Term Memory) são um tipo específico de redes neurais recorrentes que receberam muita atenção recentemente na comunidade de aprendizado de máquina. De forma geral, as redes LSTM possuem a característica de realimentação, o que permite um efeito de memória de curto e longo prazo. Com isso, a saída é modulada pelo estado dessas células, tornando-se uma propriedade muito importante quando precisamos que a previsão da rede neural dependa do contexto histórico das entradas, e não apenas da última entrada.

Source: [Didática Tech](https://didatica.tech/lstm-long-short-term-memory/)

Exemplo de uma aplicação

Uma das aplicações das redes neurais LSTM é a predição de séries temporais, isto é, tentar prever o próximo valor a partir de valores anteriores. Assim, escolheu-se um problema onde, dado um ano e um mês, a tarefa é prever o número de passageiros de companhias aéreas internacionais em unidades de 1.000. Os dados variam de janeiro de 1949 a dezembro de 1960, ou 12 anos, com 144 observações.

Ou seja, dado o número de passageiros (em unidades de milhares) neste mês, qual é o número de passageiros no próximo mês?

Você pode escrever uma função simples para converter a única coluna de dados em um conjunto de dados de duas colunas: a primeira coluna contendo a contagem de passageiros deste mês (t) e a segunda coluna contendo a contagem de passageiros do próximo mês (t+1) a ser prevista.

In [ ]:
# LSTM for international airline passengers problem with regression framing
import numpy as np
import matplotlib.pyplot as plt
from pandas import read_csv
import math
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import LSTM
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
In [ ]:
# convert an array of values into a dataset matrix
def create_dataset(dataset, look_back=1):
	dataX, dataY = [], []
	for i in range(len(dataset)-look_back-1):
		a = dataset[i:(i+look_back), 0]
		dataX.append(a)
		dataY.append(dataset[i + look_back, 0])
	return np.array(dataX), np.array(dataY)
In [ ]:
# fix random seed for reproducibility
tf.random.set_seed(7)
In [ ]:
# load the dataset
dataframe = read_csv('airline-passengers.csv', usecols=[1], engine='python')
dataset = dataframe.values
dataset = dataset.astype('float32')
In [ ]:
# normalize the dataset
scaler = MinMaxScaler(feature_range=(0, 1))
dataset = scaler.fit_transform(dataset)
In [ ]:
# split into train and test sets
train_size = int(len(dataset) * 0.67)
test_size = len(dataset) - train_size
train, test = dataset[0:train_size,:], dataset[train_size:len(dataset),:]
In [ ]:
print("Train shape: ", train.shape)
print("Test shape: ", test.shape)
Train shape:  (96, 1)
Test shape:  (48, 1)
In [ ]:
# reshape into X=t and Y=t+1
look_back = 1
trainX, trainY = create_dataset(train, look_back)
testX, testY = create_dataset(test, look_back)
In [ ]:
# reshape input to be [samples, time steps, features]
trainX = np.reshape(trainX, (trainX.shape[0], 1, trainX.shape[1]))
testX = np.reshape(testX, (testX.shape[0], 1, testX.shape[1]))
In [ ]:
# create and fit the LSTM network
model = Sequential()
model.add(LSTM(4, input_shape=(1, look_back)))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(trainX, trainY, epochs=100, batch_size=1, verbose=2)
Epoch 1/100
94/94 - 2s - loss: 0.0431 - 2s/epoch - 24ms/step
Epoch 2/100
94/94 - 0s - loss: 0.0229 - 224ms/epoch - 2ms/step
Epoch 3/100
94/94 - 0s - loss: 0.0163 - 212ms/epoch - 2ms/step
Epoch 4/100
94/94 - 0s - loss: 0.0148 - 232ms/epoch - 2ms/step
Epoch 5/100
94/94 - 0s - loss: 0.0140 - 217ms/epoch - 2ms/step
Epoch 6/100
94/94 - 0s - loss: 0.0130 - 217ms/epoch - 2ms/step
Epoch 7/100
94/94 - 0s - loss: 0.0121 - 217ms/epoch - 2ms/step
Epoch 8/100
94/94 - 0s - loss: 0.0112 - 226ms/epoch - 2ms/step
Epoch 9/100
94/94 - 0s - loss: 0.0104 - 211ms/epoch - 2ms/step
Epoch 10/100
94/94 - 0s - loss: 0.0094 - 220ms/epoch - 2ms/step
Epoch 11/100
94/94 - 0s - loss: 0.0084 - 211ms/epoch - 2ms/step
Epoch 12/100
94/94 - 0s - loss: 0.0074 - 219ms/epoch - 2ms/step
Epoch 13/100
94/94 - 0s - loss: 0.0066 - 221ms/epoch - 2ms/step
Epoch 14/100
94/94 - 0s - loss: 0.0057 - 217ms/epoch - 2ms/step
Epoch 15/100
94/94 - 0s - loss: 0.0050 - 214ms/epoch - 2ms/step
Epoch 16/100
94/94 - 0s - loss: 0.0043 - 214ms/epoch - 2ms/step
Epoch 17/100
94/94 - 0s - loss: 0.0037 - 220ms/epoch - 2ms/step
Epoch 18/100
94/94 - 0s - loss: 0.0032 - 216ms/epoch - 2ms/step
Epoch 19/100
94/94 - 0s - loss: 0.0029 - 210ms/epoch - 2ms/step
Epoch 20/100
94/94 - 0s - loss: 0.0026 - 220ms/epoch - 2ms/step
Epoch 21/100
94/94 - 0s - loss: 0.0024 - 212ms/epoch - 2ms/step
Epoch 22/100
94/94 - 0s - loss: 0.0022 - 226ms/epoch - 2ms/step
Epoch 23/100
94/94 - 0s - loss: 0.0021 - 208ms/epoch - 2ms/step
Epoch 24/100
94/94 - 0s - loss: 0.0020 - 218ms/epoch - 2ms/step
Epoch 25/100
94/94 - 0s - loss: 0.0021 - 217ms/epoch - 2ms/step
Epoch 26/100
94/94 - 0s - loss: 0.0020 - 208ms/epoch - 2ms/step
Epoch 27/100
94/94 - 0s - loss: 0.0020 - 220ms/epoch - 2ms/step
Epoch 28/100
94/94 - 0s - loss: 0.0020 - 215ms/epoch - 2ms/step
Epoch 29/100
94/94 - 0s - loss: 0.0021 - 219ms/epoch - 2ms/step
Epoch 30/100
94/94 - 0s - loss: 0.0020 - 210ms/epoch - 2ms/step
Epoch 31/100
94/94 - 0s - loss: 0.0020 - 222ms/epoch - 2ms/step
Epoch 32/100
94/94 - 0s - loss: 0.0021 - 221ms/epoch - 2ms/step
Epoch 33/100
94/94 - 0s - loss: 0.0021 - 213ms/epoch - 2ms/step
Epoch 34/100
94/94 - 0s - loss: 0.0020 - 214ms/epoch - 2ms/step
Epoch 35/100
94/94 - 0s - loss: 0.0021 - 213ms/epoch - 2ms/step
Epoch 36/100
94/94 - 0s - loss: 0.0020 - 224ms/epoch - 2ms/step
Epoch 37/100
94/94 - 0s - loss: 0.0020 - 207ms/epoch - 2ms/step
Epoch 38/100
94/94 - 0s - loss: 0.0020 - 213ms/epoch - 2ms/step
Epoch 39/100
94/94 - 0s - loss: 0.0020 - 219ms/epoch - 2ms/step
Epoch 40/100
94/94 - 0s - loss: 0.0020 - 223ms/epoch - 2ms/step
Epoch 41/100
94/94 - 0s - loss: 0.0020 - 216ms/epoch - 2ms/step
Epoch 42/100
94/94 - 0s - loss: 0.0020 - 216ms/epoch - 2ms/step
Epoch 43/100
94/94 - 0s - loss: 0.0020 - 212ms/epoch - 2ms/step
Epoch 44/100
94/94 - 0s - loss: 0.0020 - 210ms/epoch - 2ms/step
Epoch 45/100
94/94 - 0s - loss: 0.0021 - 216ms/epoch - 2ms/step
Epoch 46/100
94/94 - 0s - loss: 0.0019 - 229ms/epoch - 2ms/step
Epoch 47/100
94/94 - 0s - loss: 0.0020 - 216ms/epoch - 2ms/step
Epoch 48/100
94/94 - 0s - loss: 0.0021 - 216ms/epoch - 2ms/step
Epoch 49/100
94/94 - 0s - loss: 0.0020 - 211ms/epoch - 2ms/step
Epoch 50/100
94/94 - 0s - loss: 0.0020 - 219ms/epoch - 2ms/step
Epoch 51/100
94/94 - 0s - loss: 0.0020 - 213ms/epoch - 2ms/step
Epoch 52/100
94/94 - 0s - loss: 0.0020 - 209ms/epoch - 2ms/step
Epoch 53/100
94/94 - 0s - loss: 0.0020 - 211ms/epoch - 2ms/step
Epoch 54/100
94/94 - 0s - loss: 0.0020 - 219ms/epoch - 2ms/step
Epoch 55/100
94/94 - 0s - loss: 0.0020 - 210ms/epoch - 2ms/step
Epoch 56/100
94/94 - 0s - loss: 0.0020 - 215ms/epoch - 2ms/step
Epoch 57/100
94/94 - 0s - loss: 0.0020 - 213ms/epoch - 2ms/step
Epoch 58/100
94/94 - 0s - loss: 0.0020 - 214ms/epoch - 2ms/step
Epoch 59/100
94/94 - 0s - loss: 0.0020 - 219ms/epoch - 2ms/step
Epoch 60/100
94/94 - 0s - loss: 0.0020 - 212ms/epoch - 2ms/step
Epoch 61/100
94/94 - 0s - loss: 0.0020 - 217ms/epoch - 2ms/step
Epoch 62/100
94/94 - 0s - loss: 0.0021 - 213ms/epoch - 2ms/step
Epoch 63/100
94/94 - 0s - loss: 0.0020 - 211ms/epoch - 2ms/step
Epoch 64/100
94/94 - 0s - loss: 0.0020 - 221ms/epoch - 2ms/step
Epoch 65/100
94/94 - 0s - loss: 0.0020 - 224ms/epoch - 2ms/step
Epoch 66/100
94/94 - 0s - loss: 0.0020 - 217ms/epoch - 2ms/step
Epoch 67/100
94/94 - 0s - loss: 0.0021 - 209ms/epoch - 2ms/step
Epoch 68/100
94/94 - 0s - loss: 0.0020 - 219ms/epoch - 2ms/step
Epoch 69/100
94/94 - 0s - loss: 0.0020 - 209ms/epoch - 2ms/step
Epoch 70/100
94/94 - 0s - loss: 0.0020 - 214ms/epoch - 2ms/step
Epoch 71/100
94/94 - 0s - loss: 0.0020 - 214ms/epoch - 2ms/step
Epoch 72/100
94/94 - 0s - loss: 0.0020 - 216ms/epoch - 2ms/step
Epoch 73/100
94/94 - 0s - loss: 0.0020 - 222ms/epoch - 2ms/step
Epoch 74/100
94/94 - 0s - loss: 0.0021 - 229ms/epoch - 2ms/step
Epoch 75/100
94/94 - 0s - loss: 0.0020 - 227ms/epoch - 2ms/step
Epoch 76/100
94/94 - 0s - loss: 0.0020 - 211ms/epoch - 2ms/step
Epoch 77/100
94/94 - 0s - loss: 0.0019 - 218ms/epoch - 2ms/step
Epoch 78/100
94/94 - 0s - loss: 0.0020 - 216ms/epoch - 2ms/step
Epoch 79/100
94/94 - 0s - loss: 0.0020 - 209ms/epoch - 2ms/step
Epoch 80/100
94/94 - 0s - loss: 0.0020 - 211ms/epoch - 2ms/step
Epoch 81/100
94/94 - 0s - loss: 0.0019 - 216ms/epoch - 2ms/step
Epoch 82/100
94/94 - 0s - loss: 0.0021 - 220ms/epoch - 2ms/step
Epoch 83/100
94/94 - 0s - loss: 0.0020 - 211ms/epoch - 2ms/step
Epoch 84/100
94/94 - 0s - loss: 0.0019 - 217ms/epoch - 2ms/step
Epoch 85/100
94/94 - 0s - loss: 0.0020 - 214ms/epoch - 2ms/step
Epoch 86/100
94/94 - 0s - loss: 0.0020 - 223ms/epoch - 2ms/step
Epoch 87/100
94/94 - 0s - loss: 0.0021 - 235ms/epoch - 2ms/step
Epoch 88/100
94/94 - 0s - loss: 0.0020 - 231ms/epoch - 2ms/step
Epoch 89/100
94/94 - 0s - loss: 0.0020 - 213ms/epoch - 2ms/step
Epoch 90/100
94/94 - 0s - loss: 0.0020 - 217ms/epoch - 2ms/step
Epoch 91/100
94/94 - 0s - loss: 0.0020 - 228ms/epoch - 2ms/step
Epoch 92/100
94/94 - 0s - loss: 0.0020 - 210ms/epoch - 2ms/step
Epoch 93/100
94/94 - 0s - loss: 0.0020 - 226ms/epoch - 2ms/step
Epoch 94/100
94/94 - 0s - loss: 0.0020 - 209ms/epoch - 2ms/step
Epoch 95/100
94/94 - 0s - loss: 0.0021 - 216ms/epoch - 2ms/step
Epoch 96/100
94/94 - 0s - loss: 0.0020 - 219ms/epoch - 2ms/step
Epoch 97/100
94/94 - 0s - loss: 0.0020 - 214ms/epoch - 2ms/step
Epoch 98/100
94/94 - 0s - loss: 0.0020 - 215ms/epoch - 2ms/step
Epoch 99/100
94/94 - 0s - loss: 0.0020 - 213ms/epoch - 2ms/step
Epoch 100/100
94/94 - 0s - loss: 0.0020 - 221ms/epoch - 2ms/step
Out[ ]:
<keras.callbacks.History at 0x7f663c5adb10>
In [ ]:
# make predictions
trainPredict = model.predict(trainX)
testPredict = model.predict(testX)
3/3 [==============================] - 0s 4ms/step
2/2 [==============================] - 0s 5ms/step
In [ ]:
# invert predictions
trainPredict = scaler.inverse_transform(trainPredict)
trainY = scaler.inverse_transform([trainY])
testPredict = scaler.inverse_transform(testPredict)
testY = scaler.inverse_transform([testY])
In [ ]:
# calculate root mean squared error
trainScore = np.sqrt(mean_squared_error(trainY[0], trainPredict[:,0]))
print('Train Score: %.2f RMSE' % (trainScore))
testScore = np.sqrt(mean_squared_error(testY[0], testPredict[:,0]))
print('Test Score: %.2f RMSE' % (testScore))
Train Score: 22.68 RMSE
Test Score: 49.34 RMSE
In [ ]:
# shift train predictions for plotting
trainPredictPlot = np.empty_like(dataset)
trainPredictPlot[:, :] = np.nan
trainPredictPlot[look_back:len(trainPredict)+look_back, :] = trainPredict
In [ ]:
# shift test predictions for plotting
testPredictPlot = np.empty_like(dataset)
testPredictPlot[:, :] = np.nan
testPredictPlot[len(trainPredict)+(look_back*2)+1:len(dataset)-1, :] = testPredict
In [ ]:
# plot baseline and predictions
plt.plot(scaler.inverse_transform(dataset))
plt.plot(trainPredictPlot)
plt.plot(testPredictPlot)
plt.show()

Questão 7¶

Apresente um estudo sobre transferência de conhecimento (transfer learning) no contexto de deep learning.

De acordo com Brownlee (2019), Transfer Learning é um método de aprendizagem de máquina no qual permite que um modelo treinado para uma determinada tarefa seja utilizado como um ponto de partida de um modelo para outro problema com características similares.

Esta técnica é bastante utilizada em tarefas de visão computacional, processamento de linguagem natural, dados os vastos recursos computacionais (principalmente as GPUs) e tempo necessários para desenvolver modelos de rede neural sobre esses problemas, além das emissões de CO2 causadas pelo gasto de energia. Além disso, é possível encontrar modelos treinados por grandes empresas por muitas horas e com uma grande infraestrutura disponíveis de forma gratuita na internet. Isso permite obter excelentes resultados com um tempo de treinamento muito menor do que seria treinar um novo modelo do zero.

É possível utilizar esta técnica a partir dos três passos a seguir:

  1. Selecione Modelo pré-treinado: Escolha um modelo de origem pré-treinado dentre os modelos disponíveis. Muitas instituições de pesquisa lançam modelos em conjuntos de dados grandes e desafiadores que podem ser incluídos no conjunto de modelos candidatos a serem escolhidos.

  2. Modelo de Reutilização: Utiliza o modelo pré-treinado como ponto de partida para um modelo na segunda tarefa de interesse. Isso pode envolver o uso de todo ou de partes do modelo, dependendo da técnica de modelagem utilizada.

  3. Modelo de sintonia: Opcionalmente, o modelo pode precisar ser adaptado ou refinado nos dados do par de entrada-saída disponíveis para a tarefa de interesse. Assim, você pode adicionar mais camadas ao final da rede para adptar ao seu problema.

Exemplos de modelos pré-treinados na área de visão computacional:

  • Oxford VGG Model
  • Google Inception Model
  • Microsoft ResNet Model

Exemplos de modelos pré-treinados na área de processamento de linguagem natural:

  • Google’s word2vec Model
  • Stanford’s GloVe Model

Referências:¶

  • Brownlee, Jason (2019). A Gentle Introduction to Transfer Learning for Deep Learning. Disponível em: https://machinelearningmastery.com/transfer-learning-for-deep-learning/